Gabriela Ramos arrived at Unesco, the United Nations Educational, Scientific and Cultural Organization, in 2020 with a mission: to deliver a kind of Universal Declaration of Artificial Intelligence (AI). The document, eventually titled the Recommendation on the Ethics of Artificial Intelligence, was presented in 2021 and has been signed by 193 countries, although only 24 are implementing it. Non-binding in nature, it offers guidelines for action on issues such as data governance, mass surveillance technologies, the abuse of cognitive biases, and the control of neurotechnology. It has won the approval of, among others, the European Commission and Japan, as well as companies such as Microsoft and Telefónica.
The Unesco initiative ran the risk of falling into irrelevance. But then ChatGPT came along. The fears aroused by this tool led to the publication, three weeks ago, of a letter signed by thousands of AI experts calling for a moratorium on the development of this technology. That wake-up call from some of the founding figures of the discipline has renewed interest in the work of the UN agency. “There has been exponential growth in consultations from countries that want to meet with us. We have advanced talks with 18 countries. Where requested, we carry out an ethical impact assessment of AI. We help to make diagnoses, to evaluate government teams, and to think about what type of institution should supervise the development of these regulations,” explains Ramos, born in Michoacán 59 years ago, by videoconference.
The Mexican has an extensive career as an international civil servant, developed mainly at the OECD and the G20. Since 2020 she has been the deputy director general of Unesco. “All the uncertainty surrounding ChatGPT regarding its impact and its development is helping us to raise awareness about a key issue. That’s the only silver lining to all of this.”
Question. What do you think of ChatGPT and the boom in generative artificial intelligence?
Answer. ChatGPT confirms what we have been saying: that there is exponential growth in these technologies. Before, we were very concerned with ensuring that machine learning algorithms were robust in their definitions and that the data they used was of good quality. The large language models that chatbots rely on make it even more difficult to understand how they work. And I think this is the fundamental issue. It is a pity, because we are looking at spectacular technology. But it suffers from the same problems as less massive AI: when these systems hit the market, they aren’t always secure, trustworthy, or transparent. All of these developments are taking place in a general regulatory vacuum. Europe is advancing its directives. President Biden himself has now called a consultation to see whether these developments have to be certified before they hit the market. China has established regulations for those who want to release products based on this technology.
Q. Is it necessary to regulate AI?
A. We need a framework that allows us to evaluate ex ante. An ethical impact assessment is needed on freedoms, on rights, on inclusive outcomes, and all of that has to happen before the product is on the market. There must be procedures that allow us to ensure that these developments are fully tested and that we at least understand what their impact may be. But we are still living in an upside-down world: first you release them, and then you ask what their consequences are. It seems ridiculous to me to have to keep saying that we need regulations. All markets are regulated. Imagine if pharmaceutical companies could market any medicine without any kind of testing. Or if you could open a restaurant and serve food of whatever quality you wanted. The issue is not whether or not there will be regulation, but what kind.
Q. Thousands of experts signed a letter three weeks ago calling for a moratorium on generative AI research. Do you agree with it?
A. What that letter confirms to us is that we do not feel capable of handling these systems. I think the letter makes sense. Everyone has put the emphasis on the pause, but what is also being asked is that there be no further developments before we have solid regulatory frameworks. Unesco has been working on this for the past two years, since the 193 member countries approved the Recommendation on the Ethics of Artificial Intelligence. The question to ask here is whether governments have the powers, institutions, and laws to moderate and govern AI. The letter from the experts has meant that many people are now seeking more information on the subject. The fact that these people, the very ones who developed this technology, say that a pause is needed means that they themselves do not trust that they can handle it. I do not believe that a moratorium is a realistic option. What we need to do is speed up the regulations. And there I do agree: we need governance mechanisms for artificial intelligence.
Q. What does the Unesco Recommendation on the Ethics of Artificial Intelligence propose?
A. We say that technologies have to uphold human rights, that they have to contribute to the climate transition and deliver fair and robust results. They must also be transparent, and there must be accountability. 60% of these technologies are developed by American actors, and another 20% by Chinese companies. This concentration results in a lack of diversity and in discriminatory, biased outcomes. This whole business model has to change.
Q. Does Unesco favor AI being regulated by each state, or should it be handled by some supranational body?
A. Our Recommendation is not binding, but it has been signed by 193 countries. In the end, it is governments that have to define their regulatory frameworks. What we are doing now at Unesco, based on the standards and best practices we have already defined, is thinking about which institutions and regulations help countries to converge. The United States, which is considering a return to Unesco, has also said that our debate about what kind of international rules should govern AI is important. When someone sees their fundamental rights violated, when someone is discriminated against and not shown a job offer because the AI did not have them in its databases, or when facial recognition technology fails to detect you because you are a person of color or a woman, then no matter how many multilateral agreements there are, governments have a responsibility to act.
Q. Is it realistic to try to push for international regimes to regulate a technology like AI?
A. Millions of decisions are being made with the support of artificial intelligence without any transparency. If you are discriminated against, you don’t even know whether it was by a person or an algorithm. It is up to us to provide context, and then the countries will move forward in their own decision-making. In my 20 years of experience in multilateral organizations, I have learned that progress can be made with concrete evidence, showing what the forecasts are for certain developments and pointing out how countries with good regulations are not left behind in technological competition.
Q. The Cold War nuclear non-proliferation treaties made sense because the United States and the Soviet Union were both involved. What would happen in the case of AI if a key player were left out?
A. When I arrived at Unesco three years ago, many people told me: what is the use of working on an ethical framework for AI if the US, which is the main developer, is not a member of Unesco? The Recommendation was signed by 193 countries, including China. The US is going to take note, because what we are doing is not imposing a single model but raising awareness. We want to establish a roadmap on how to understand AI, how to approach these developments, how to prevent negative impacts, how to define them, and how to advance regulations and institutions.
Q. In the development of AI, geopolitics plays an important role.
A. Yes, we are in the middle of a technological race. What kind of technology will be adopted is being decided right now. All countries are acquiring AI packages to manage education, health, or security. How do you make sure they understand what they are buying? The producers of these technologies, who have an interest in maximizing the number of users, are taking note of what is happening in terms of regulation. China is part of the Unesco consensus. Will they comply with the agreement? Well, they signed it, didn’t they?
Q. ChatGPT, which is barely six months old, has placed generative AI among the major topics of the day. How much time do we have to develop proper governance mechanisms to regulate this technology?
A. We are already on it. The European Union has already gone quite far with its directives, with its risk-based approach. It is a different approach from that of Unesco, but very complementary for analyzing which types of developments carry the greatest risk. I would say that if the European Union directives were already fully in force, ChatGPT would not have entered the market. Why? Because it would have the characteristics of developments that involve great risks and that require special attention from the regulator. What has happened with ChatGPT has lent a sense of urgency to what we were already doing. The Unesco Recommendation was adopted between 2020 and 2021. The EU directives, between 2020 and 2022. We are doing well.