It is not the first time humanity has faced a technological development with unforeseeable consequences for its own existence. As early as 1942, the writer Isaac Asimov proposed, in his story “Runaround,” three laws to protect people from robots, and they are still used as a reference. The International Atomic Energy Agency was created in 1957 “in response to the deep fears and expectations inspired by the discoveries and varied uses of nuclear technology,” according to the organization itself. International Humanitarian Law (known as the law of war) has spent years seeking effective regulation of Lethal Autonomous Weapons Systems, which can attack without human intervention. Europe has now begun processing the world’s first regulation of artificial intelligence (AI), a technological development capable of accelerating progress in fundamental fields such as health and energy, but also of threatening democracies, increasing discrimination, and breaking every limit of privacy. “Sowing unfounded panic does not help; on the contrary. Artificial intelligence will continue to operate, and we must improve it and act preventively,” argues Cecilia Danesi, a science communicator and lawyer specializing in AI and digital rights, professor at several international universities and author of The Empire of Algorithms (just published by Galerna).
The first thing to understand is what an algorithm, the basis of artificial intelligence, actually is. Danesi, a researcher at the Institute for European Studies and Human Rights, describes it in her book, a fundamental compendium for understanding the scenario facing humanity, as a “methodical set of steps that can be used to make calculations, solve problems, and reach decisions.” The algorithm, then, is not the calculation but the method. And that method can encode the precise model to identify a cancer in images, discover a new molecule with pharmacological uses, make an industrial process more efficient, develop a new treatment or, on the contrary, generate discrimination, false information, a humiliating image, or an unfair situation.
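Danesi’s distinction between the method and the calculation can be made concrete with a toy example (hypothetical, not taken from her book): the function below is the algorithm, a fixed sequence of checks; any single decision it produces is merely one calculation made with it. The thresholds are invented for illustration and have no clinical meaning.

```python
def triage(temperature_c: float, heart_rate_bpm: int) -> str:
    """A toy decision algorithm: a methodical set of steps.

    The ordered checks are the method (the algorithm); the answer
    for any one patient is just a calculation made with that method.
    Thresholds are illustrative only, not medical guidance.
    """
    if temperature_c >= 39.0 or heart_rate_bpm >= 120:
        return "urgent"
    if temperature_c >= 37.5:
        return "observe"
    return "routine"

# The same method, applied to different inputs, yields different decisions.
print(triage(39.5, 80))   # prints "urgent"
print(triage(36.8, 70))   # prints "routine"
```

The same structure is what can carry a beneficial model or, with different steps and data, a discriminatory one: the harm or benefit lies in the method and in what feeds it.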
OpenAI CEO Sam Altman, Turing Award winner Geoffrey Hinton, AI researcher Yoshua Bengio, and Elon Musk, among others, have called for regulation and urgent action to address the “existential risks” AI poses to humanity. These include the spread and amplification of misinformation (such as the preponderance of false and malicious content on social platforms), biases that reinforce inequalities (such as the Chinese social credit system or the mechanical treatment of people as potential risks because of their ethnicity), and the breaking of all privacy limits to collect the data that feeds the algorithm and remains hidden.
The European Union has begun to negotiate what, if deadlines are met, is set to become the first AI law in the world. It could be approved during the Spanish presidency of the EU, and its objective is to ban uses deemed an “unacceptable risk” (indiscriminate facial recognition or the manipulation of people’s behavior), to regulate AI’s use in sectors such as health and education, and to penalize and block the sale of systems that do not comply with the legislation.
UNESCO has developed a voluntary ethical framework, but that very voluntariness is its main weakness: China and Russia, two countries that use this technology for mass surveillance of their populations, have signed its principles.
“There are fundamental rights involved, and it is an issue we must address and worry about, yes, but with balance,” Danesi argues. It is a criterion similar to the one set out by Juhan Lepassaar, executive director of the European Union Agency for Cybersecurity (ENISA): “If we want to secure AI systems and also ensure privacy, we need to look at how these systems work. ENISA is studying the technical complexity of AI to better mitigate cybersecurity risks. We also need to find the right balance between security and system performance.”
One of the risks raised so far is the replacement of people by AI-operated machines. On this point, Danesi says: “Machines are going to replace us, and they are already doing so. Some replace us; others enhance or complement our work. The question is what we want to be replaced in, and where, and what requirements those machines must meet in order to make certain decisions. First we have to identify a problem or a need that justifies using them or not.”
In the field of robotics, Asimov anticipated this problem and established three laws: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; and 3) a robot must protect its own existence as long as such protection does not conflict with the first or second law.
Permanent and preventive supervision
“It sounds great. Done: artificial intelligence can never harm a human. Wonderful. The problem is that in practice it is not so clear,” Danesi explains. The researcher recalls “a case in which two machines were programmed to optimize a negotiation, and the system concluded that the best way was to create another, more efficient language. Those who had designed the program could not understand that language and disconnected the machines. The system operated within its parameters, but artificial intelligence can go beyond what is imagined.” In this case, the machine did not harm its programmers, but it did exclude them from the solution and its consequences.
The key, for Danesi, lies in “permanent supervision: algorithmic audits of high-risk systems, those that can significantly affect human rights or security. They have to be evaluated and reviewed to verify that they do not violate rights and do not carry biases. And it must be done continuously, because systems can acquire biases as they keep learning. And preventive action must be taken to avoid damage and to create systems that are ethical and respectful of human rights.”
Another great danger of the uncontrolled use of AI is its military application, which the proposed EU regulation excludes in its initial draft. “It is one of the most dangerous uses of artificial intelligence. Often the law prohibits something that, in practice, continues to operate, and that is where it can do the most harm to people,” the researcher laments.
“Should we fear machines? The answer is no! What we should fear, where appropriate, is people and the use they may make of technology,” Danesi argues in her book The Empire of Algorithms.
Respect for citizen data
Manuel R. Torres, professor of Political Science at Pablo de Olavide University and member of the advisory board of the Elcano Royal Institute, speaks in similar terms: “The problem is the proliferation of a technology that must be kept out of the wrong hands. It is, after all, knowledge that has been released into the world and that anyone can make use of.”
Torres adds another problem to the technological landscape and to the proposed European regulation, which he defends for its regulatory power: “The conflict lies in how this technology is developed in other places that have no scruples or limits whatsoever regarding respect for the privacy of the citizens who feed all this with their data.”
The political scientist cites China as an example: “Not only is it in that technology race, it also has no problem in massively using the data its own citizens leave behind to feed and improve those systems. However scrupulous we want to be with the limits we put on our local developers, in the end, if this does not happen globally, it is also dangerous.”
Torres concludes: “We find ourselves in a territory with few references to rely on to know how to address the problem, and where, in addition, there is a problem of understanding the repercussions of this technology. Many of our legislators are not exactly familiar with these developments.”