We are entering a new phase of artificial intelligence (AI) development. A handful of companies are writing the code that will shape future generations, and they are doing it without society's participation in key decisions, without anyone supervising whether the AI now being cooked up for mainly commercial purposes is also the one we want ordering our lives in the coming decades. Amy Webb (East Chicago, Indiana, 1974) has taken this pivotal moment as a starting point to describe three possible future scenarios for this century. That, in fact, is what Webb does: she is the founder and CEO of the Future Today Institute, a company that researches, models and prototypes the risks and opportunities that lie ahead. Its clients include some of the world's largest corporations, but also central banks and governments, including the US Department of Defense.
From her point of view, artificial general intelligence (AGI, the kind that will definitively surpass human intelligence) is already taking its first steps. AlphaGo Zero proved that a machine is capable of surpassing the human mind, devising winning strategies for a game as complex as Go that even the greatest masters in the world cannot fully understand. “People often think that AGI will resemble the replicants of Blade Runner or Jarvis from the Marvel movies. It is true that AlphaGo cannot drive a car or invent new recipes, but I would say that it is an early example”, she maintains.
In her book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (published in Spanish by Península), the adviser warns that if we do not take control of the situation, what we see in the coming years will likely not be to our liking. The nine giants will see to that: the G-MAFIA (Google, Microsoft, Amazon, Facebook, IBM and Apple) and the BATs (Baidu, Alibaba and Tencent). “I chose these because they are the ones that make the most transversal use of AI”, the author explains.
Q. Do we have reason to fear AI?
A. I think so. People often equate the threat of AI with the rise of some kind of Terminator. The future ahead of us is much more nuanced. You don’t have to die from a gunshot; you can also die from millions of paper cuts. Some of the problems we face today seem very minor: being shown different news from what someone else sees, different recommendations, different prices depending on what each person can afford… These are not catastrophic things, but they steadily erode our decision-making capacity. In parallel, AI continues to gain ground. In the US, there are those who advocate that air traffic control be handled by automated systems. In fact, AI already does most of the work, although the pilots remain in command of the critical operations: takeoff and landing. AI may be better than us, but how do we make sure the data it operates on is right?
Q. You argue that while China has a very clear strategy to become the world leader in AI, companies in the West have been allowed to develop this technology as they see fit.
A. China has never hidden its ambitions: it wants to master AI, but also synthetic biology, gene editing… They have told everyone their plan and they are executing it step by step; the problem is that for a long time nobody took them seriously. In the US, the government has looked the other way. At some point people started worrying that Facebook was doing bad things and that Amazon and Google were becoming too big, and only now are they trying to legislate. In the EU, regulation has been considered from the beginning, but the technology being regulated is not made in Europe. So we have the Wild West in the US, where companies do what they want; a planned system in China, commanded by the government; and an EU that is trying to set the standard for how things should be done. The problem is that these three actors have no incentive to collaborate in any way. We are heading towards separate AI networks, as has already happened with the internet.
Q. Big tech is developing AI that matches their business interests, but not necessarily what people need. How can you find a better balance?
A. Indeed, they are companies and therefore they have to make money, but they cannot break the law. And in fact they are not breaking it, even though we think that what they do is not in our best long-term interest. The big problem is that most people don’t care. People want good product experiences, not to make decisions. And that creates a paradox: on the one hand, the majority surely thinks that big tech needs to be broken up, but at the same time no one would be willing to voluntarily stop using the services they offer.
Q. You propose that AI become a public good.
A. That already happens with energy and telecommunications. In both sectors there are companies that make money; the difference is that there is a stronger alignment between governments and companies. If we can conceptually define AI as a public good, then we can incentivize more public-private collaboration. It is going to be complicated, and the longer we wait, the worse, because the ecosystem will be more developed.
Q. You also make it clear that China can be a threat.
A. China has invested heavily in the developing world. But not just money: they have diplomats on the ground, and they are building strong relationships. I think there are going to be big geopolitical changes in the next two years. And I wouldn’t be surprised if China builds a geopolitical bloc supported by technology, a field it is likely to lead. I think that is a problem for Europe and the US.
Q. In your book you describe three possible scenarios depending on how we manage AI. How did you develop them?
A. I am a futurist. I know that sounds strange, but we have our methods. Planning and study are a central part of our work. There are many ways to describe future scenarios, and they always incorporate a great deal of research and model building. In this book, I challenged myself to see what would happen if we made the right decisions, made terrible decisions, or continued with business as usual. I was interested in building three scenarios (optimistic, pessimistic and pragmatic) because I want people to see that we have alternatives. And I hope it helps us make good decisions.
Q. What does the optimistic scenario look like?
A. It does not mean that we live in an idyllic world, but that we face difficult decisions in the best possible way with the information we have. In this scenario, a system of economic incentives is applied as a way to unite countries, and personal data is processed appropriately to face the challenges we have today. Technology is also beginning to be developed to serve the centennials (those born between 1990 and 2009). For this scenario to work, state-sponsored cyberterrorism must also be identified. I am aware that the latter is very aspirational, but it would make sense.
Q. And the pragmatic one?
A. The pragmatic scenario depicts what would happen in a few years if we continue as before. We will see greater business consolidation, and most countries, including Spain, will join the Google family or the Applezon family, the merger of Apple and Amazon. That means one of these companies will control your data and provide the operating system for your daily life. Microsoft and IBM continue to exist, albeit in the background, and Facebook disappears, because I really don’t see long-term continuity in its business model. This future explores what would happen if our lives were no longer interoperable. At the same time, China continues to concentrate power while the US fights big tech in the courts and the EU regulates the use of technology. At the end of this scenario, China creates its own geopolitical bloc based on the economic and diplomatic relations it has long been cultivating and on the application of its AI in its areas of influence, which become dependent on its technology. China creates One China and uses technology to lock other countries into its bloc: it makes it difficult for companies to operate beyond their own borders, it makes it difficult for people to travel abroad… And this scenario culminates in a new type of technological warfare in which China dominates.
Q. And the pessimistic one?
A. In the worst-case scenario, China no longer needs the US as a trading partner or a source of intellectual property. It exports its social credit system to more than 150 countries, and in exchange for obedience, these countries gain access to its technological network, trade, and a stable financial system backed by Beijing. Their citizens are free to move around all the countries in the Chinese orbit as long as their social credit score is sufficient. The US and Europe are surrounded. Over time, China develops an artificial superintelligence that decides that, on an overcrowded planet short of food, the population outside the bloc must be annihilated in order to survive.
Q. Which of the three scenarios seems most plausible to you?
A. I think right now we are moving decidedly towards the pragmatic scenario. Many believe that scenario seems quite dystopian. Well, if it makes you uncomfortable, I have bad news: we are headed that way. The catastrophic scenario does not seem very distant either. We still have time to correct course, but the pandemic has divided us even more in many ways. I hope that as the world begins to emerge from this crisis we remember that we have a lot of work to do and many difficult decisions to make if AI is to serve the public interest.