Extended Intelligences to Serve Real-World Problems

In a fragmented world, extra-large expectations emerge from signals and practices already taking shape. In the domain of intelligences, this means moving beyond the race for benchmarks and grounding AI in real problems, and in networks of human and non-human agency. The anti-problem method and extended cognitive hygiene help prevent workslop in agentic spaces.

In a fragmented world, where grand narratives have given way to multiple perspectives and continuously shifting contexts, expectations are no longer forecasts but emergent properties. They are elements of reality that arise from observing patterns, cross-reading signals, and the convergence of practices that may appear far apart. We call them extra-large expectations because they transcend traditional boundaries – geographical, generational, disciplinary – and become tools for navigating complexity.

In the domain of intelligences, this emergent expectation is taking on a new form. The point is not to approach artificial intelligence as an alien mind competing with the human, but as a network of agencies – human and non-human – interacting with one another. This book is an invitation to step away from the race for benchmarks, run in the quest to crown “the best AI,” and to ground ourselves instead in real problems that we – as people and organizations – can address in hybrid, collaborative, situated ways.

When, in 1998, Andy Clark and David Chalmers proposed the theory of the extended mind, they were in fact describing something human beings have always done: distributing intelligence beyond the confines of the skull – from a stick probing the depth of a river, to a Post-it on a desk, to the Internet. We have always been natural cyborgs, creatures that extend themselves into the world to expand their possibilities.

And this is where the extra-large expectation takes shape: artificial intelligences do not replicate the human; they generate something unprecedented. One example is the famous “Move 37,” with which AlphaGo defeated Lee Sedol, one of the world’s strongest Go players, in 2016. The event was widely read as the definitive surpassing of human capabilities in one of the most complex strategic games ever devised. Yet if we leave the terrain of competition, “Move 37” appears in a different light: a genuinely new solution, part of a strategy that a human being – alone – could never have conceived. Indeed, by traditional standards it would even have been considered foolish. And yet – thanks to the contribution of an AI – it expanded the canon of possibilities in the game of Go.

Moving away from false problems

There is something paradoxical about the age of generative AI. We have access to tools of seemingly limitless power, and yet we use them primarily to optimize what already exists: writing emails, creating logos, summarizing documents. Nothing wrong with that, but it risks leaving us with the illusion that we have transformed the way we work, when in reality we have only made micro-activities more efficient. A race toward numbers – how much will it save? how many jobs will become obsolete? – distracts us from what truly matters. If we limit ourselves to analyzing today’s needs, we inevitably fall into this trap: technology sets the agenda, and we chase applications.

How do we get out? With a counterintuitive method: the anti-problem. Instead of asking, “How can we use AI to increase productivity?”, we reverse the question entirely: “What would make AI completely useless for our work? What would be the worst possible way to integrate it into our processes?”
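
To make the inversion tangible, here is a minimal sketch in Python – purely illustrative, with made-up question templates rather than any real tool or library – that turns an ordinary adoption goal into its anti-problem prompts. Negating the answers those prompts provoke is what yields usable design principles.

```python
# Illustrative sketch of the anti-problem inversion (hypothetical templates,
# not an established method or tool). Given an ordinary adoption goal,
# generate the reversed questions; negating the answers they provoke
# turns "worst ways to fail" into concrete design principles.

def anti_problem(goal: str) -> list[str]:
    """Turn a 'how can we use AI to <goal>?' question into anti-problem prompts."""
    return [
        f"What would make AI completely useless for {goal}?",
        f"What would be the worst possible way to integrate AI into {goal}?",
        f"How could we guarantee that AI adoption damages {goal}?",
    ]

if __name__ == "__main__":
    for question in anti_problem("improving our editorial workflow"):
        print(question)
```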

This inversion helps us move away from false problems – those generated by a technology in search of applications – and reconnect with reality. Because extra-large expectations, as we said, are not fantasies about the future but emergent properties of the present. And to see them, we must stop looking where everyone else looks. Then we discover that the worst way to use AI is not “to use it too little” but to use it poorly: delegating without verifying, automating without understanding, producing without substance.

It is, at bottom, a matter of expectations. What do we truly expect from AI? And what do AI systems, in turn, “expect” from us? Answering means recognizing that AI has entered our relational space.

Plural intelligences and agentic spaces

As Gino Roncaglia notes in his contribution, today we have at our disposal a plurality of systems with profoundly different natures and purposes: multimodal platforms, deep-thinking models, advanced research systems. It is a constellation of intelligences operating in increasingly complex and interconnected ways.

We are moving toward agentic spaces – environments in which AI systems autonomously chain multiple tasks, orchestrate sub-processes, and delegate work to specialized agents. This proliferation raises a daunting question: how are human-AI interactions really changing? The Anthropic Economic Index, which analyzes millions of real conversations, points to a striking shift. Between December 2024 and August 2025, “directive” conversations – those in which users delegate entire tasks to the AI – jumped from 27% to 39%. The shift marks a turning point: automation has overtaken augmentation. In other words, people are no longer using AI primarily to explore together or learn iteratively; increasingly, they assign a task and expect it to be completed autonomously.

This is where the problem of miscalibrated expectations emerges. If AIs are becoming more capable, if automation is increasing, if companies are investing billions, why did a recent MIT Media Lab report find that 95% of organizations see no measurable return from their AI investments? These numbers do not mean that AI “doesn’t work”; they suggest that we are confusing adoption with impact.

And this is how workslop is born. The term – coined by researchers at BetterUp Labs and Stanford’s Social Media Lab – describes AI-generated output that “masquerades as productivity but lacks substance.” Polished slides filled with jargon and no content. Reports that look professional but require hours of work before they become usable.

This phenomenon tells us something essential about expectations. Organizations expect AI to boost productivity, yet they send contradictory signals: use it all the time, do it fast, delegate everything. The result? People copy and paste output without verification, pushing the cognitive burden downstream. It is not AI’s fault – it is the fault of poorly calibrated expectations, the wrong metrics, and the confusion of adoption with value.

That is why we need “cognitive countermeasures”, which may become new meta-skills to learn. Because extended intelligence works only if we know when to trust, when to doubt, when to delegate, and when to retain control. Because agentic spaces can become capability multipliers – or workslop factories. And the difference lies in how we inhabit them.
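
To picture what one such countermeasure might look like in practice, here is a hedged sketch of a delegation checklist – the Task fields and the three criteria are our own illustrative assumptions, not an established protocol – mapping a task’s profile onto a rough trust posture toward an AI agent.

```python
# Illustrative sketch of a "when to delegate" checklist. The criteria are
# assumptions made for the sake of the example, not an established protocol.

from dataclasses import dataclass

@dataclass
class Task:
    verifiable: bool        # can a human check the output faster than producing it?
    low_stakes: bool        # would an undetected error be acceptable?
    context_retained: bool  # does the delegator keep enough context to judge quality?

def delegation_mode(task: Task) -> str:
    """Map a task profile onto a rough trust posture toward an AI agent."""
    if task.verifiable and task.context_retained:
        return "delegate, then verify"    # automation with a human checkpoint
    if task.low_stakes:
        return "delegate and spot-check"  # cheap errors, sampled review
    return "keep in the loop"             # augmentation rather than automation

if __name__ == "__main__":
    task = Task(verifiable=True, low_stakes=False, context_retained=True)
    print(delegation_mode(task))  # -> delegate, then verify
```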

Looking where no one else is looking

There is another perspective we risk missing, enclosed within our Western technological bubble. Payal Arora, a digital anthropologist who studies AI in the Global South, shows what happens when artificial intelligences are applied to real problems, with a pragmatism the West has partly lost.

Her examples remind us of something essential: when AI is rooted in specific contexts, when it addresses tangible challenges – such as mobility, education, health, inclusion – it becomes truly extended intelligence. It does not replace the human; it amplifies the human capacity to act in the world. It does not chase abstract metrics; it measures its value in people’s lives.

Extended minds require extended cognitive hygiene

The extra-large expectation for intelligences, then, is a collaborative extension of human thought, one that allows us to become fully what we have always been: situated intelligences, capable of thinking through others and with others.

But this expectation – like all emergent properties in fragmented contexts – does not realize itself. It requires an “extended cognitive hygiene”: learning what to delegate and what to keep internal; how to formulate questions that maximize the value of interaction; when to trust and when to doubt. It requires conscious interaction protocols, as Cabitza teaches us. It requires looking beyond our geographic and cognitive borders, as Arora urges us to do. And it requires understanding the plurality of systems we are dealing with, as Roncaglia explains.

XL Expectations. Value Pathways in a Fragmented World
Issue 17

Weconomy 17 is not a linear journey; it is an ecosystem of connections. Across five domains – demographics, organizations, aesthetics, intelligences, and measurements – we gather fragments, perspectives, and practices to understand XL expectations and translate them into micro-experiments, meaningful connections, and new metrics for change.

Author

Vincenzo Scagliarini

Professional journalist with a humanities background and a geek at heart. Since 2018, he has been Editor-in-Chief of the Weconomy project.