James Manyika (57 years old, Harare, Zimbabwe) says that artificial intelligence (AI) has been around us for decades; it’s just that people haven’t noticed it. He was already working in this field long before he became Google’s vice president of Research, Technology and Society, a position he has held for a year. In that time, artificial intelligence has jumped from the apocalyptic scenes of science fiction movies to the forefront of world news. “25 years ago, when I did my PhD in robotics, no one understood what we were talking about. People still do not realize that long before the arrival of the chatbot, they were already benefiting from artificial intelligence,” he explained to EL PAÍS in Madrid, where he participated last Thursday in a Google event at the Lázaro Galdiano Museum.
Manyika insists that this revolution has been brewing above all over the last 15 years, although he acknowledges that it is in recent months that the irruption of artificial intelligence has accelerated, after the launch of ChatGPT, OpenAI’s generative AI chatbot. Since last February, Google has also had its own application in this category, Bard, which still cannot be used in Spain. “It will arrive soon,” Manyika assures. “There is a lot of work to do because Spanish is a complex language, with many variants. We want to do it well because it is a very important issue.”
Question. Is it possible that the importance of artificial intelligence is being overestimated?
Answer. No, I do not think so. We are giving it importance because it is such a profound change that it will affect almost everything we do: the economy, productivity, how we conceive of information and learning. For me, the question is how to manage both sides of the coin: making sure that it is useful for society and, at the same time, being able to deal with the challenges that come our way.
Q. Will the world change?
A. I think so. What I find so critical about AI is that it will be like computers or electricity. It is an essential technology: I cannot imagine an activity or a part of society in which it is not going to be useful. In that sense, I think it will change the world. At the same time, I think it is so powerful and so useful that it will also bring very significant consequences, risks and challenges, which we have to deal with.
More than two-thirds of jobs will be different. They will not disappear; they will simply evolve and change.
Q. What do you mean when you talk about risks?
A. On the one hand, there are the risks that occur when the technology itself does not work as we want it to, when it turns out to be inaccurate or wrong. Other types of risks have to do with privacy and the handling of information. And furthermore, even when those two aspects work well, it is possible to misuse this technology. It can be used for criminal purposes, for disinformation, or to create threats to national security. There is also a fourth complication, which has to do with side effects, such as the impact that AI can have on jobs, mental health, and other socioeconomic factors that we should pay attention to.
Q. In fact, there are already people losing their jobs because of AI…
A. There are jobs where machines can perform some tasks that people now do, and in those there will be losses, it is true. Jobs will also be created, both through increased productivity and through the creation of new categories. But I think the biggest effect, and this is what all the analysis now seems to indicate, is that jobs are bound to change. Think of bank tellers, who in the 1970s spent 90% of their time counting money, while now they spend less than 10% of their time on this task. Our data suggests that more than two-thirds of jobs will be different. They will not disappear; they will simply evolve and change.
Q. Should we be afraid of AI?
A. No, but we should be careful how we use it. Artificial intelligence is not something of the last few months; we have been living with it for years. If you look back on its history, you will realize that as soon as any of its applications became useful, we stopped calling it AI; we keep the term for the things that are yet to come or the things that scare us. I’m not saying we shouldn’t worry. But we should also remember all the ways in which we already use it and in which it is very useful to us.
Q. Geoffrey Hinton left Google precisely to warn about the risks of this technology.
A. I know Geoff well. I think what he was trying to do, and what many of us have been trying to do, is to highlight that we should take a preventive approach. Because yes, the benefits are incredibly useful, but there are also concerns to be aware of. I think he wanted to remind us of all the risks this technology carries, especially as it gets more advanced. And I think that approach is appropriate.
Q. Why are there so many doomsday manifestos signed by the fathers of AI?
A. I myself signed one of those letters, because I consider it essential to ensure that due attention is being paid. Whenever we have a powerful technology, we have to think about both its benefits and its real risks. At Google we want to be bold and responsible. I know those two things sound contradictory, but they both matter.
Q. Is regulating AI a way to be responsible?
A. Yes. These technologies are too important not to be regulated. We have been saying so publicly for a long time. Any technology that is this powerful, groundbreaking and complex needs regulation, however useful it may be. If it is affecting people’s lives and society, there has to be some form of regulation.
Q. There are those who ask to pause its development until it is regulated.
A. We would be pausing the benefits of this technology for people. Do we really want to stop sending flood alerts to the millions of people who receive them today? Stop working on advances in medicine? I don’t think so. There would have to be a clear plan for what we would be doing during that hiatus, and everyone working on the development of AI would have to be coordinated. What I think is important is to make sure that we are in conversation with governments, to work out what we want to do and how we want to do it.
Q. Is there a sector where it is dangerous to apply AI?
A. I think not so much of specific sectors as of the use it is given. A technology applied in medicine is different from the same technology applied in the transport sector. The risks are different. I agree that it is necessary to reflect on how this technology is applied in each case. For example, as much as I love what we’re doing with Bard, I think it’s a terrible idea to ask it for legal advice. Now, if you ask me whether Bard should be used to write an essay and explore ideas, my answer is of course.
Q. Is it a good idea to ask it for help if we are sick?
A. I would not get a medical diagnosis from a chatbot. In general, if I wanted to get factual information, I would go to Google Search. If I want to know what happened in Madrid this morning, I wouldn’t use Bard for that either.
Q. Do you think AI chatbots (such as Bard or ChatGPT) can replace search engines?
A. I don’t know what other companies are doing, but I can tell you what we are doing. For us, Bard is not the same as Google Search (the company’s classic search engine). Yes, there are ways in which we are bringing AI and large language models into Search, but they are two very different cases. We launched Bard as an experiment: we’re trying to understand why people are using it and what it’s useful for. And we are still learning. It’s important to note that we’ve been using artificial intelligence to improve Search for much longer than people realize. Six years ago, when you tried to use the search engine, you probably had to write a fairly precise query for it to return something useful. Today you no longer have to. Writing something that is more or less correct is enough.
Q. What do you think the scenario will be like in 10 years?
A. I think it will be amazing. I think of all the things that can benefit society, for example the possibility of understanding thousands of languages, and it excites me. Right now we have set ourselves the goal of translating 2,000 languages on Google, but in ten years I think we can reach all 7,000 languages spoken in the world, even languages that are disappearing. It would be extraordinary. But at the same time, I hope that we will also have made incredible progress in combating all the risks that we have talked about.
Part of our fear of AI comes from an inability to accept that machines can do creative things, too.
Q. What would need to happen for AI to get out of our control?
A. That we somehow manage to develop systems that design themselves and are capable of setting their own goals. That would be problematic, but we’re light years away from that. That would be the science fiction version. A more likely and problematic situation would have less to do with artificial intelligence running amok than with people themselves. The danger is that humans put these technologies to horrible uses. We know that the same system that can decipher protein structures to develop drugs could also design toxins or viruses if it falls into the wrong hands. This is what really worries me in the short term.
Q. Where does the fear of artificial intelligence come from?
A. (Laughs) From Hollywood movies. I’m joking, but I also think it’s true. I go back to what I said before, to the idea that when this technology starts to be useful, we simply stop calling it AI. It seems we reserve this label for things we see in movies, or things we don’t understand yet, or things that are about to happen. On the other hand, I think that part of that fear can be traced back to a very human factor, a question that humanity has always asked itself: what does it mean to be human when machines manage to do the things that until now differentiated us from any other living being? Until now we thought that we were the only ones capable of making art, the only ones with creativity and empathy. I think part of that fear comes from an inability to accept that machines can also do creative things, which until now were considered exclusive to humans.
Q. Could we say that what scares us is that machines can do something better than us?
A. We have to face that fear. We have to adjust our thinking and ask ourselves who we are and what we are good at. There was a time when we assumed that only people who could do math in their heads were smart. That if you couldn’t recite from memory, on a test, things you’d learned in a textbook, you probably weren’t very smart. We used to think all this, but now we’ve moved on, and I think the same will happen with AI. It’s just that maybe it’s happening faster than we humans are prepared to take in. But I believe that humanity has always adapted and will continue to do so.