Interacting with AI to (Re)Discover the Value of Incompleteness

With generative AI, the relationship shifts: from tool to interlocutor. Five interaction protocols – exploratory, verificatory, delegative, interrogative, and dialogic – help make practices and responsibilities explicit. In organizations, this calls for transparency, a shared culture, and generative leadership.

How might the way we interact with digital machines change after the widespread adoption of generative artificial intelligence systems? We asked Federico Cabitza, Associate Professor of Human-Computer Interaction and Decision Support at the University of Milano-Bicocca. In 2021, he published a book on the subject with Luciano Floridi – Intelligenza artificiale. L’uso delle nuove macchine (Artificial Intelligence: The Use of New Machines, Bompiani) – and he has authored more than 200 scientific publications to date. This body of work has consistently placed him among the world’s top 2% of AI scientists, according to the annual list published by Stanford. The full interview follows below.

You describe yourself as an interactionist – someone who studies how human and AI agency enter into a relationship with one another. It is a compelling shift in perspective, because we believe the mainstream narrative around generative AI has focused too heavily on “outputs” and benchmarks, and not enough on the new forms of connection that can emerge with this new “colleague.” In that respect, what positive human-AI relationships are you observing?

I describe myself as an interactionist because I do not think that the real discontinuity introduced by generative AI technologies lies in their ability – extraordinary as it is – to generate texts, images, or more-or-less accurate decisions. What is changing, and what I am interested in observing, is the nature of the relationship that takes shape between humans and machines – or, more precisely, between people and intelligent digital systems. We are working with something that no longer merely executes, but that appears to “understand” us, at least at a superficial level, and that responds not only to our questions but also to our attitudes and, I would say, even our intentions.

Some of the most promising interactions I have observed occur when AI is experienced not as a tool but as an interlocutor. A doctor who asks a system, “Why did you propose this diagnosis?”, or a teacher who explores with the AI different ways of explaining a concept to students with different characteristics. In those moments, it is not merely a matter of use, but of dialogue – and therefore of exchange, and of a certain degree of mutual alignment and learning.

This approach strongly recalls the work of Clifford Nass and Byron Reeves in the 1990s, who already showed that we tend to treat computers as if they were people – attributing intentions, mental states, and even emotions to them. Today, with generative AI, that dynamic intensifies. It is not merely a cognitive error: it can become an opportunity to build new kinds of relationships – perhaps more reflective and even more human.

Another interesting point is that there is no single strategy when it comes to “reliance” on AI. What matters is adaptation to context. On this basis, you have developed five interaction protocols…

Yes, by reliance I mean the way we entrust tasks, judgments, or forms of support to AI systems. Relying on AI is not a single, uniform act. There is no one “correct” strategy, but rather different modes of engagement and interaction that we can activate depending on the task, the context, and the state of mind we are in.

I have identified at least five modes, which I call “interaction protocols.” The first is an exploratory one – when I use AI to generate ideas, stimuli, or alternatives. It is a form of distributed ideation, useful when the problem is not yet well defined.

Then there is a verificatory protocol. Here the AI’s role is closer to that of a reviewer or a more experienced colleague – a kind of mentor. I ask it to confirm my assumptions, to offer counterexamples, or to challenge my hypotheses.

The third is a delegative protocol: I rely on AI for repetitive or technical tasks, entrusting it with execution and detail while retaining control over output quality – accountability for which ultimately remains with us.

The fourth is an interrogative protocol, in which the AI becomes an object of questioning rather than merely a source of answers. I ask it for explanations, lines of reasoning, alternatives, and prompts for reflection.

And finally there is the dialogic protocol, which is the most sophisticated. Human and AI move forward together, each contributing to shaping the direction. It more closely resembles an interaction between colleagues, in which growth happens jointly.

Ultimately, this range of stances echoes the reflections of Thomas W. Malone on the future of hybrid work. At the MIT Center for Collective Intelligence, he has explored how groups composed of humans and AI can collaborate in new ways – ways in which what matters is not only expertise, but also how coordination, negotiation, and co-evolution unfold.

You argue that watching an AI become “better than us” can be demotivating. This is a particularly urgent issue, given that many research institutes report that engagement in the workplace is already very low. In what ways, then, can human-AI interaction become engaging – and even enjoyable?

One of the greatest risks I see – and one that concerns me deeply – is that comparing ourselves with a “better” AI may generate frustration or resignation. For this reason, I strongly oppose the narrative that constantly pits us against machines and deliberately places them in competition with us in every domain. We often come out badly from these comparisons, which are frequently artificial. So if a machine writes better, decides faster, remembers everything… what is left for me to do? Is there still any need for me at all?

But it does not have to be this way at all. In many contexts, I am observing the opposite: AI can restore motivation. When it is used to enrich one’s perspective, generate ideas, or challenge one’s cognitive habits, it can become a tool for personal growth – helping us move a little closer to the best members of our reference community, or to those we admire within our work teams.

There is also a playful dimension that should not be underestimated. Many people tell me that “chatting” with AI stimulates creativity, curiosity, and even a sense of wonder. This is something Nass and Reeves had already anticipated in their book The Media Equation: if technologies behave in socially credible ways, we can establish engaging – even emotionally engaging – interactions with them. And today, with generative models, that theory has become part of everyday experience.

You argue that there should be collaboration between AI and humans. Yet we see that generative AI is still designed primarily for individual use (modelled on productivity suites). Do you think there is a way to strengthen human-human collaboration as well, and not only human-machine interaction?

An interesting paradox is that generative AI, born to empower the individual, could become a powerful enabler of collaboration.

Today we use these systems mainly in solitary ways, within personal productivity suites. But it does not have to remain that way. AI can help us better understand a colleague’s position, synthesize perspectives within a team, facilitate coordination across different departments, or moderate and summarize a meeting – or whatever happens around a working group’s table.

I believe one of the most fertile directions lies precisely here: using AI as a cognitive mediator, as a tool for negotiating meaning more effectively between human beings. After all, that is what any good translator, facilitator, or coach does: they help people understand one another. And in my view, there is largely unexplored potential in machines designed specifically for these functions.

Learning at My Own Pace, or Better Yet, at AI’s Pace

Logotel insight by Daniele Cerra – Partner Chief Innovation Officer

Community is the most flexible and dynamic context for supporting the development of each professional’s potential. From a learning perspective, our brains are especially receptive when what we are learning responds to an immediate, specific need. AI – through classic chatbots reshaped as coaches and tutors – makes content more accessible and offers personalized, hands-on interactions, tailored to context and capable of meeting the diverse needs and learning styles of individual members.

It Wasn’t a Noticeboard. It Was a Dojo. And Inside It, Technology Learned to Speak with People

Logotel insight by Matteo Ordanini – Senior Learning Designer

We created a Dojo for Microsoft Copilot adoption: not a technology platform, but a digital space to train, exchange ideas, and reflect. Here, even those with fewer tech skills drew on the community’s energy to develop agents able to measure the impact of DEI initiatives and reduce the hard costs of procurement. Others – professionals who tend to keep a low profile – emerged, to their managers’ surprise, as talents and innovators. The community was not a container. It was a living organism. And us? We brought content, sparked ideas, and watched the impact take shape. It wasn’t just technology. It was transformation.

The introduction of generative AI into organizations also depends greatly on leadership – on the people who make and share decisions. From this perspective, what changes would you like to see in leadership practice? We are thinking, for instance, of the “secret gardens” in which many employees already use AI unofficially.

The phrase “secret gardens” is very evocative: it reminds me of the expression Ethan Mollick uses for a very similar phenomenon, the “secret cyborg” – the worker who uses AI but hides it and does not like to reveal it. Introducing AI in companies is primarily a matter of governance, not just technological adoption, and governance takes shape through the different ways leadership is exercised – namely, the strategies that each manager, executive, or leader deploys to guide and manage.

Today we see a curious but worrying phenomenon, especially given the associated compliance and cybersecurity risks: many professionals use AI every day, but they do so “in secret,” outside official policies. These “secret gardens” are a symptom of a lack of trust – or of a lack of safe spaces where experimentation is possible and, indeed, valued and encouraged.

What is needed is leadership capable of recognizing that AI use is already taking place, and that it cannot be controlled only through bans or more-or-less rigid guidelines. It is necessary to build culture, provide tools, and accept that experimentation – when shared – is also an opportunity for organizational learning.

A generative form of leadership should encourage transparency, dialogue, and the thoughtful use of AI as a lever for collaboration, not only for individual efficiency.

What makes you optimistic about where generative AI is heading?

I deeply believe that we need an “urgent optimism,” as the theme of this issue of Weconomy puts it. Not a naïve optimism, but one that is lucid and informed, critical and reflective.

This stance can be grounded in a simple observation: generative AI forces us to rethink what it means to be intelligent, creative, and competent. And that is a major opportunity.

We are discovering – or rather rediscovering – the value of judgment, explanation, and responsibility. But also the beauty of incompleteness, ambiguity, and the negotiation among different points of view.

If we succeed in designing AI not only to “do better,” but to think better together, then perhaps we will be able to say that this technology has not replaced us, but transformed us – and has brought us back into relationship with one another.

Magazine

XL Expectations. Value Pathways in a Fragmented World
Issue 17

Weconomy 17 is not a linear journey; it is an ecosystem of connections. Across five domains – demographics, organizations, aesthetics, intelligences, and measurements – we gather fragments, perspectives, and practices to understand XL expectations and translate them into micro-experiments, meaningful connections, and new metrics for change.

Author

Federico Cabitza

Associate Professor of Human-Computer Interaction and Decision Support Systems, University of Milano-Bicocca.