In your book From Pessimism to Promise, you focus on the Global South's different approach to artificial intelligence, which, as you yourself note, is no longer merely a technological issue but has entered our everyday lives. What are the signs of optimism coming from that part of the world?
Many of the most important issues concerning artificial intelligence are social rather than technological in nature. How do we build trust? What is authenticity? What does fair ownership mean? These are questions that shape regulation, tool design, value redistribution, and legal and ethical approaches to AI.
The book is the result of working with hundreds of organizations of different types and of my experience on about 15 boards of directors, from UN agencies to the World AI Summit, which is closely connected to Silicon Valley actors. Thanks to this global perspective, I've identified a significant difference between the West and the rest of the world in how they perceive AI.
Western organizations and governments see AI as something that controls us, that can destroy our democracy, put mental health at risk, and even threaten the very existence of humanity and the planet.
These beliefs are leading the West toward what I call "pessimism paralysis": a state of near-powerlessness about how to proceed, beyond asking how to resist, contain, and control AI. This explains the regulatory approach we see particularly in Europe.
The rest of the world, on the other hand, is extremely optimistic about artificial intelligence. Countries like India and Brazil see AI as a way to address chronic problems, applying it to concrete situations. In Peru and Ecuador, they are working out how to use AI in targeted ways to improve public transportation and reduce energy inefficiencies.
Another example comes from South Africa, which is using AI to diversify educational materials in various languages and dialects, so that children can finally have accessible educational resources. It’s a tangible and concrete case. They don’t need large language models, but small and domain-specific models, built in their languages and dialects. They’re developing them first because the barriers to entry are much lower than a few years ago. A technological leap is happening: you no longer need to be a programmer to build systems that work for you.
The same is happening in the Philippines and East Asia: they’re pioneering different systems that are moving away from Western infrastructure and platforms and are contributing to outlining a future where I believe there will be much more decentralization.
It’s interesting to see how other countries are applying AI to real problems, like new ways to rethink education or mobility. And it’s something we’re actually forgetting to do in the West because perhaps we’re no longer focusing on the big challenges. Following the example of what the Global South is doing, what are some concrete examples that can help us rethink some of the problems we need to face?
AI is still just a tool. People think it’s a huge game-changer that will radically transform everything, but we’ve already gone through this phase with every single new technology of the past.
When the light bulb was invented, Edison said it would radically transform education and that we would no longer need classrooms. The same happened with the telephone: it was said it would drastically reduce time with our families and depersonalize relationships.
Even writing, speaking of education, was seen as the ruin of our intellectual abilities. At that time, memory was important: if you could recite entire works by heart, you were considered an intellectual.
This is exactly the debate happening now, where we question whether delegating text production to ChatGPT will lead to cognitive decline.
The truth is that we tend to overestimate what technology can do and underestimate what humans do with these tools.
AI, essentially, pushes us to optimize our skills and rechannel them. Some abilities may be lost or decline, as happened with the ability to do arithmetic operations with the advent of the calculator. But having forgotten how to do mental arithmetic doesn’t make us idiots.
Concretely, what AI is doing is significantly reducing workload. A practical example comes from the world of education. Recently I participated in a UN conference involving various EdTech groups developing software in Europe. Part of their work is thinking about new solutions in a context, that of our continent, where there is a shortage of teachers, and where those in post are burned out and overloaded by challenges ranging from socio-cultural and linguistic diversity to the need for personalization.
On one hand, our expectations about the value of education are growing; on the other, teachers’ capacity is limited by the complexity of these challenges.
Looking at my experience in university education, I use AI a lot to build my curricular courses, brainstorm about what readings I could suggest, create rhetorical questions, suggest workshops, and write.
What used to take a week now takes a day, which means I might not have to experience the burnout felt by colleagues, because I can focus more on the actual content, on what excites me.
I think this is very positive and it’s happening with other professional figures too: AI helps perform some basic activities allowing people to focus more on deep content, on what they’re passionate about. I think what AI can do positively is help us reshape the way we work in new forms, more suited to our aspirations.

In some of your reflections, you've questioned the word "innovation" itself. Often we try to innovate for innovation's sake, to create something new. But when we face a real problem, it needs to be solved: it's a matter of accessibility, sustainability, elements of care…
We must first recognize that there’s a double standard in how we use the term “innovation.”
In the West, whether it's Silicon Valley or Europe, innovation is linked to the hero archetype, like Steve Jobs or Elon Musk. Often, moreover, something is defined as innovative when it actually isn't. Take Elon Musk: he didn't invent Tesla, but rather a new industrial process for making electric cars. And his vision for the future of X mirrors what WeChat already is: a platform that has existed in China for more than a decade.
Yet WeChat isn’t given the same innovative weight: the Sino-American “war” itself exemplifies this double standard between who gets defined as an innovator and who gets labeled as an imitator, like China.
We continue to be surprised by China’s achievements – the latest case was the DeepSeek LLM – yet they’re ahead of us in various fields, from fintech to solar energy, from autonomous vehicles to electric ones.
What’s interesting is that the Chinese think differently because they’re forced to. The US model, in fact, isn’t scalable. Indiscriminate data collection isn’t aligned with European values and not even with our urgent sustainability agenda to address the climate crisis, because if we continue at the pace of US models, we’ll have no water or electricity left.
Other Global South countries are forced to pioneer for their very survival. Take India: it’s extremely dense and crowded. If you build a data center in an area, you’ll create competition with villages in the same area and risk sliding toward civil war. That’s why India doesn’t use water to cool data centers: they have to think in terms of solar energy and find other ways to become sustainable.
Many African countries are creating technologies that don’t rely on big data to function. Not for grand environmental or planetary ideals, but for pragmatism: resource scarcity in the Global South creates new waves of innovation. And these are models we Westerners should look at with interest, given that we’re the biggest consumers of these energy resources.
Besides recognizing this double standard when talking about innovation, we need to reflect on the fact that there are innovations that put sustainability at the center of their mission not for altruistic reasons, but because it’s the only way they can take root in certain markets.
The Anghami case is emblematic [It's a rapidly growing music streaming platform, founded in Lebanon and used mainly in the Middle East and North Africa, ed.]. It was born to address copyright infringement with real innovation, whereas the Western approach to piracy in the Global South was to invest millions of dollars in regulation and punishment. That approach never worked in Latin America, Asia, or Africa because, for example, going to the cinema in Cape Town costs as much as an average salary. That's why the piracy market became dominant, the norm.
Anghami developed a system recognizing that people won’t simply pay for music, but for added value. First, they thought about how to convince creators to put their music on the platform instead of pirating it and started paying musicians directly, unlike Spotify, which always relies on record labels.
Second, they started using AI to detect copyright infringement and for proper attribution of rights and value.
Above all, they're creating added value by building a community space, an ecosystem where people come together to interact with each other. This communal, convergent conception of digital platforms is a mode of use that is very widespread in East Asia. In the West, by contrast, platforms are very separate ecosystems: there's an app for music, an app for ordering food, an app for something else.
In this case too, I believe there are many lessons to learn from the divergent approaches of the Global South for our future, especially if we want to be responsible: we must respect the resources we have.
Speaking of creativity and value generation, what do you observe in terms of generational and intercultural dynamics?
There’s certainly a significant generational gap. The older generation acts driven by fear and operates according to conservative logic.
The generation that grew up with digital tools, instead, reflects less on the concept of ownership and more on how to obtain fair value from their work, especially in the context of remix culture, which is already an integral part of young people’s lives on TikTok and Instagram.
On these platforms, people use templates and narratives created by others, to which each creator adds their own creative contribution. Including AI as an additional creative tool doesn’t represent such a big conceptual leap. However, the question shifts: it’s no longer about who owns what, but how to ensure fair attribution of value.
Young creators, including those from the Black community on TikTok, are asking for transparent attribution systems: you can reuse content, but you need to credit the source, even when it's AI-generated. Here, blockchain already offers technological possibilities for redistributing value based on the visibility and added value that each piece of content generates.
The problem, in this case too, is not technological but social: if we don’t first commit to imagining a new paradigm of value redistribution, we won’t be able to implement already available technologies. The real question is how we approach these platforms.
Recently I moderated a panel with the winners of the AI Film Festival in Amsterdam. It was fascinating to discover that most of them don’t consider themselves creative in the traditional sense. Among them was a Hispanic actress, tired of the usual stereotypical roles, who decided to become a filmmaker to write better stories for people like her. She entered the AI cinema world precisely because you don’t need traditional filmmaker skills.
The festival director said he was surprised by the participants’ profile: single mothers without time to shoot six-month documentaries in the field, people with backgrounds far from traditional filmmaking, each with their own unique motivations that, thanks to AI, could be transformed into something practical.
This democratization of cinema threatens the consolidated powers of the film industry, which has operated for decades by systematically excluding many voices. And it's clear that the traditional establishment sees AI as a risk. We should instead ask ourselves: is it positive for society? Probably yes, because it allows anyone to express themselves and tell stories from unprecedented perspectives without prohibitive costs, also making this content accessible to a much wider audience.
This is another interesting perspective, which connects to what Fei-Fei Li says about the importance of thinking about motivation before the applications of a technology. You mention in your book the example of Kenya, where smartphones are used as walkie-talkies for communication because there’s no common alphabet. Freedom of expression is a strong motivation, as is communication. What are other motivations that drive toward a different use of AI?
A strong motivation can arise from the desire to build solidarity. Think of Mexican activists and many movements worldwide operating in a context of growing democratic decline, where authoritarian figures are elected everywhere and communication restrictions intensify continuously.
These digital activists have understood that the struggle will be long and have consciously chosen not to be visible heroes. Their strategy is anonymous decentralization, because the costs of visibility are extremely high – just look at how many Iranian or Hong Kong protesters ended up in prison. AI offers them valuable tools: they can use artificially generated images to keep the movement alive without exposing real people to danger. It’s ethical activism designed to last over time.
Another interesting example comes from a conversation with a group of Berlin artists preparing for a major exhibition. They contacted me because they’re exploring AI’s impact on our bodies and physicality.
They realized that their narratives always tended toward the dystopian, leaving the audience with a feeling of heaviness and discomfort. They wanted to do something different: use these tools to build, not to destroy. It’s an important change, because AI can facilitate truly exciting forms of immersiveness and build empathy. Through virtual reality enhanced by AI, we can create vivid and deep emotional connections.
There are therefore multiple ways to use these tools innovatively and impactfully: as means of creative expression; as tools of ethical activism in the political sphere; and as new modes of communication that allow us to build for the future. Because ours is a long-term vision.
Speaking of the Global South, an interesting aspect that many don’t know is the India Stack, an open-source state digital infrastructure, designed to be replicated and shared globally. The approach is innovative: it’s not just about creating an open-source product, but an entire open and scalable ecosystem.
As a member of the Indian Digital Economy Board, I’m directly involved in the India Stack, India’s digital public infrastructure, which has had extraordinary success and represents hope for many countries.
Any entrepreneur can use it for free to develop their own applications. Being open source and open to the world, other countries can adapt it to their own needs. What’s important isn’t who had the idea, but using the best available solutions to address urgent issues and make the market work again, blocked by Big Tech’s hyper-monopolistic practices.
The Indian government identified a problem in the digital public infrastructure: two large private companies were using 80% of the resources. It then introduced legislation based on company size: large corporations can use the infrastructure up to a certain point, then they must contribute economically to the system. It’s not about preventing Big Tech from making profits, but ensuring they pay fair value in taxes and contributions.
China is exploring an even more radical idea: a public digital data repository. Even giants like Alibaba must provide metadata to this repository, thus allowing new entrepreneurs to access the same data as billion-dollar multinationals. This approach can revitalize the market and is completely opposite to the American model, based on proprietary data, and the European one, focused on privacy and data protection.
In the Global South, this is seen as one of the few ways to stimulate entrepreneurship. 90% of the world's young people live in these regions, and there is a desperate urgency to create new forms of work and life. Youth unemployment is at a critical level, reaching 60% in Namibia, and without decent work opportunities there is a real risk of social revolt.
As I highlighted for innovations in the Middle East, in this case too it’s a matter of survival. In the Global South, the urgency on these issues is maximum and that’s why they’re completely reinventing the system.
In Europe instead we don’t have the same pressure, which is why we continue to think in conservative terms of copyright and GDPR adaptations.
The model you’ve described is fascinating: public actors create the basic infrastructure, while those who use and populate it contribute economically. This creates a completely new ecosystem. And this is precisely where the European Union is failing.
Europe has attempted for decades to achieve something similar to the India Stack, with enormous resources, but without success. Each country wanted to do things its own way, and linguistic and cultural diversity was erroneously seen as an obstacle rather than a strength: data diversity actually determines robustness and superior quality in datasets.
My frustration stems from the realization that Europe has an extraordinary opportunity that it hasn’t known how to seize so far: innovating and building alternative futures for digital and AI.
The United States, in fact, is no longer an adequate benchmark: they’re now seen globally as cyber-bullies. Conversely, China, India and similar contexts suffer from a fundamental deficit: the lack of trust in their government systems, which tend toward paternalism and authoritarianism and show poor attention to data security.
Europe instead has something rare: a relatively high degree of trust in its institutions compared to other parts of the world. Despite the current political scenario, we have functioning democracies and cultural and linguistic richness that allows building incredibly robust and representative datasets.
If we can build our own AI systems and public digital infrastructures, there is a greater probability that they will be adopted by many more people, precisely thanks to the trust factor. Meanwhile, however, because of the pessimistic conception we discussed, Europe is still adopting its typical approach that pits regulation against innovation.

From motivations and public digital infrastructures, what new perspectives open up for entrepreneurs and for building a stronger economy?
European commitment to digital sovereignty is at historic highs. This is an extraordinary opportunity because the collective mentality has changed. An entrepreneur who today proposes solutions to strengthen this sovereignty has much greater support possibilities compared to just two years ago.
In Europe there’s a sense of urgency that crosses all sectors. In the academic world, for example, professors no longer want to use American software. Alternative, workable tools are needed, and the space to create them is enormous.
In the past it was much more difficult for entrepreneurs to enter this market: extremely heavy initial investments were needed. Today, as we’ve seen with AI filmmakers, entry barriers have collapsed. Digital infrastructures are global: an entrepreneur can use open-source public infrastructures from anywhere in the world and create functioning hybrid models.
This means many more ideas can enter the market, finally creating that genuine and healthy competition toward digital sovereignty that we need.
Speaking of Europe, to conclude: what lesson can it learn from the Global South?
The West must recover the concrete optimism of the Global South. Europe must know how to seize this precious moment and become an agent of change. People are ready to trust liberal democracies and to see us transform the regulatory approach into something functional.
We must think outside the box. Our mindset influences the questions we ask ourselves, which in turn influence what we do in practice in policy design.
As a teacher, if I’m pessimistic I’ll ask myself: how can I limit the use of AI in my classroom? But if I’m optimistic I’ll think: how can we leverage these tools to rethink education for the future?
