Generative artificial intelligence is reshaping the ways in which we learn and work. As is often the case, the debate is divided between optimists and pessimists. Where do you position yourself?
I am reasonably optimistic that AI systems will overcome at least some of their growing pains, such as hallucinations. I am far less optimistic about their impact on work, on the economy, and on society as a whole. My sense is that the net effect, in employment terms, will be negative. And I think this situation will force us to revive an old slogan: work less, so that all may work. In my view, however, shorter working hours should go hand in hand with more time devoted to continuing education, understood in a new way. We have generally thought of lifelong learning primarily as professional upskilling for specific occupations. Instead, we should be developing a new generation of basic competencies, including many new forms of digital citizenship skills, which I believe should be available to everyone. These programmes should involve schools and universities, both as institutions that help deliver lifelong learning and as key sites for many of the relevant learning activities. But I am not very optimistic that our social, economic, and political systems have either the capacity or the will to pursue this path.
A second point on which I am not optimistic concerns the risks associated with the use of artificial intelligence, risks that are enormous and increasing. I am less concerned with science-fiction scenarios in which AI takes power, even though caution is warranted: these are systems we do not fully understand, and they can exhibit unexpected properties. For me, the main risks lie in the proliferation of mid-sized models, which more easily elude the regulatory and legislative approaches currently under development. Already today, anyone can run systems with between 20 and 80 billion parameters on mid-range computers, at relatively accessible cost. These are highly capable models that could be used – to give just a few examples – to generate inappropriate content, to create computer viruses, or to produce instructions for manufacturing chemical or biological weapons.
At the end of 2023, your book L’architetto e l’oracolo. Forme digitali del sapere da Wikipedia a ChatGPT (The Architect and the Oracle: Digital Forms of Knowledge from Wikipedia to ChatGPT) came out. What was the prevailing outlook at the time, and what changes have you observed since then?
In 2023, the prevailing reaction was one of surprise. Only five years had passed since the publication of the influential paper Attention Is All You Need (2017), which introduced transformer models to the world – the family of models to which ChatGPT belongs. Back then, we did not expect generative artificial intelligence to make such substantial progress in such a short time.
At the time, the main point of reference was ChatGPT, whereas today the range of systems has multiplied. We now have access to multimodal platforms, as well as deep-thinking and deep-research systems – a plurality of models with different natures and purposes. For this reason, it no longer makes sense to speak of artificial intelligence in the singular. Some of these platforms, as I mentioned, are addressing early shortcomings. For example, hallucinations have been greatly reduced through deep-research models and through RAG (Retrieval-Augmented Generation) techniques. We can say that deep research represents today’s state of the art in generative AI. However, most people still rely primarily on one-shot systems – or, in any case, on tools not selected for the specific tasks and needs at hand – and as a result, many errors persist. One of my more recent articles focuses on bibliographic hallucinations and shows how – when you ask for a bibliography on a specific topic – you can go from 100% hallucinations in some one-shot systems, which simply make everything up, to 0% when deep-research systems are used competently.

In your book, you propose a compelling metaphor: the distinction between “architects of knowledge” (the Wikipedia model) and the oracular model of artificial intelligence. Could you explain it?
The metaphor helps to distill two different modes of working. Ever since humans began producing knowledge, they have sought to systematize and organize it. This is where the architectural model comes from: the model of encyclopedias and, today, of Wikipedia, which, so to speak, organizes a body of knowledge by giving it form and structure.
By contrast, the way generative AI systems produce knowledge is statistical and predictive, almost oracular. I do not mean to suggest that AI systems always tell the truth or predict the future, but rather that they rely on generative mechanisms that are, in technical terms, partly opaque to us. They operate on statistical-probabilistic foundations, through neural networks whose values, weights, and parameters we can inspect, yet whose outputs we cannot predict with precision. This is not something that should be frightening, because – if we reflect on it – the way we humans produce knowledge is also fairly opaque. We systematize our ideas, but where do they come from? They may depend on the moment or on the context, yet very often we are unable to describe their origin with exactness.
I believe that architecture and oracularity must work together – as they have already done in the past – to ensure that AI systems generate information and knowledge that can then be validated, verified, and structured. This kind of collaboration is, in fact, already taking place: we can see it in the increasingly widespread use of RAG techniques. One example is the Historical Archives of the European Parliament, which combine a generative engine to personalize output with an initial retrieval phase carried out by a traditional, reliable, and guaranteed information-retrieval system.
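To make the idea concrete, here is a minimal, purely illustrative sketch of such a retrieval-then-generation pipeline. The toy corpus, the word-overlap scoring, and the generate() stub are hypothetical placeholders, not the system actually used by the Historical Archives of the European Parliament; in a real deployment the retrieval step would be a proper search engine over the archive, and the assembled prompt would be sent to a generative model.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus, scoring, and the generate() stub are illustrative placeholders only.

from collections import Counter

ARCHIVE = {
    "doc-1": "The 1979 election was the first direct election of the European Parliament.",
    "doc-2": "The Historical Archives preserve the Parliament's records since 1952.",
    "doc-3": "Plenary sessions are held in Strasbourg, with additional sessions in Brussels.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (a stand-in for a real retrieval engine)."""
    q_words = Counter(query.lower().split())
    scored = sorted(
        ARCHIVE.items(),
        key=lambda item: sum(q_words[w] for w in item[1].lower().split()),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def generate(query: str, doc_ids: list[str]) -> str:
    """Stand-in for the generative step: in a real system this prompt would be sent to an LLM,
    constraining the answer to the retrieved passages."""
    context = "\n".join(f"[{d}] {ARCHIVE[d]}" for d in doc_ids)
    return f"Answer the question using only these sources:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "When was the first direct election of the European Parliament?"
    print(generate(question, retrieve(question)))
```

The design point is simply the division of labour: a deterministic, inspectable retrieval phase supplies the material, and the generative phase is confined to rephrasing and personalizing it.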
In any case, we are witnessing a growing number of systems that integrate generative capabilities with mechanisms for information verification. In deep research, for example, when an AI encounters online content that may be unreliable, it applies a form of reasoning by comparing and validating results from multiple sources. This helps the system assess the relative weight to assign to different sources before constructing its output and presenting it to the user. In this way, generative AI systems are learning to operate with the mindset of the architect: one who verifies, evaluates, controls, and produces highly structured information.
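Again purely as an illustration of the underlying logic, and not of any particular product, one can picture this validation step as a weighted vote across sources, along the lines of the following sketch (the snippets, source names, and reliability scores are invented):

```python
# Illustrative cross-source validation: retrieved snippets "vote" for an answer,
# weighted by a reliability score. All data here are invented for illustration;
# real deep-research systems use far richer signals than a single scalar weight.

from collections import defaultdict

snippets = [
    {"answer": "1979", "source": "parliament-archive", "reliability": 0.9},
    {"answer": "1979", "source": "encyclopedia", "reliability": 0.8},
    {"answer": "1984", "source": "anonymous-blog", "reliability": 0.2},
]

support: dict[str, float] = defaultdict(float)
for s in snippets:
    support[s["answer"]] += s["reliability"]

best = max(support, key=support.get)
print(f"Best-supported answer: {best} (weighted support {support[best]:.1f})")
```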
To what extent can these systems be considered creative, and how might this form of creativity enhance – or diminish – human creativity?
Many colleagues would answer this question negatively, but I would personally attribute a certain creative capacity to generative AI, simply because these systems do not “copy” or retrieve information from a database. For the first time, we have systems that do not merely assist us, but write texts and generate images or videos in our place. They produce original outputs and, in my view, are creative to some extent. This does not mean, however, that our own creativity is diminished – even when we make use of systems that are themselves, in part, creative.
Another issue on which we would welcome your perspective concerns the impact of artificial intelligence on human cognitive capacities. Do you think these systems could weaken our abilities?
This is a phenomenon that has occurred many times throughout history. Let me give an example: has the use of calculators weakened our personal ability to perform calculations? Certainly. When I was in school, I learned how to extract square roots using pencil and paper, an operation that almost no one knows how to do by hand anymore. Some tools, therefore, weaken certain individual abilities by externalizing them. It follows that, by fully delegating the writing of a text to a generative AI system, I will weaken my own capacity to produce a text.
Is the human being who uses generative systems less creative, less productive, and less cognitively engaged? The answer, I believe, depends on how they are used: when employed well and competently – as a form of support rather than substitution – they are tools that, on the contrary, can increase cognitive engagement.
I can only speak from personal experience: I work with generative AI tools on a daily basis, for instance when I’m writing an article. I don’t use them to write the text for me, which would be neither ethical nor conducive to quality, but to run certain kinds of deep research, to suggest strategies and priorities (which I then weigh against my own), or to act as a “discussant” on a draft, offering possible critiques and objections.
I don’t save time; quite the opposite. Writing an article now takes more effort and more work than it did in the past. The result, however, is often better.
It’s a different matter when we look at situations such as school and university exams and tests: more and more often, generative AI is being used to “cheat”, to produce answers in our place. Clearly, that is a problem, and one that risks lowering the quality of learning. Addressing it requires a radical rethinking of how we assess learning. If we continue to design assessments in traditional ways, they will increasingly be completed by generative AI systems. We need to change our methods, and perhaps the time has come to do so.
In this regard, are you providing your students with guidance on how to use generative artificial intelligence?
The basic guidance is this: they need to know very well how to use these systems, otherwise the quality of their work will deteriorate. If a student brings me a thesis draft that contains hallucinations, it means they have not understood how these tools work. They should then learn which system to use in each specific situation, and finally understand that the output must always be evaluated critically.
My approach is to have students work extensively on group projects and – if they use generative AI – I ask them to specify the prompts they submitted, the contexts they used, and to discuss their choices with one another. I have different groups work on the same task with different prompts and contexts, and then I ask them to compare the results to see which approaches worked best.
This kind of work strengthens students’ skills and accustoms them to the idea that there is no single, always correct answer, but rather a plurality of possible answers – sometimes wrong and almost always incomplete. Much depends on how we frame questions and on the content we provide as context. It is necessary to become accustomed to working carefully with informational contexts and sources. I do not believe, however, that banning the use of generative artificial intelligence is of much use. It would be like telling students twenty or thirty years ago not to use the Internet. This is not simply an attempt to resist change; it amounts to the educational system abdicating its responsibility to explain how to use the tools of the contemporary world in a critical and informed way. If we ban generative AI – or ban the use of the Internet – we achieve the same result: we weaken, rather than strengthen, the skills that are concretely required not only by the labor market, but more broadly by contemporary society.
Do you think artificial intelligence is reshaping the form of “our” intelligence, that is, human intelligence?
Our intelligence is constantly changing; what does not change – or does so much more slowly – is the way our brain functions, which is essentially the same as that of a Homo sapiens from thirty or forty thousand years ago. What is new today is that we are increasingly externalizing certain activities, no longer limited to memory but extending to reasoning and content production as well. The part of the brain we might imagine as a kind of “extended mind” is expanding and evolving at an ever faster pace.
So it is not the nature of our intelligence that is changing, but the way we use our brain – and, above all, the way we use the many extensions of our mind, which by now, in one way or another, have become part of our intelligence.

What should we expect from artificial intelligence going forward?
Difficult to say. In this regard, I was particularly struck by the results of a study in which 480 expert researchers in the field were asked whether these systems truly “understand” language. Four possible answers were offered: definitely yes, probably yes, probably no, and definitely no. The experts were distributed almost evenly across all four options. This shows that there is no consensus: we are facing a new situation, and there is still much we do not know about the possible trajectories of artificial intelligence. These systems still have plenty of room to surprise us; they have done so up to now, and I have the sense that they will continue to do so.
