Since the 1970s there have been recurring waves of high expectations and doomsday fears in the face of the evolution of artificial intelligence, but this year is poised to become the most hysterical. The amazement, enthusiasm and panic caused by ChatGPT and its fabulous capabilities have been followed by an open letter in which scientists and business leaders call for a digital moratorium. Contemplating this agitation, one is reminded of the Red Flag Act passed in England in 1865 to prevent accidents caused by the growing number of motorized vehicles, which imposed a maximum speed of roughly six kilometers per hour in the countryside and three in towns and cities. In addition, each vehicle had to be preceded by a person on foot carrying a red flag to warn the population. It took a few years to realize that human control of vehicles did not depend on limiting their speed to a walking pace.
It is evident that the more sophisticated a technology is, the greater its benefits but also its risks. Human beings explore this partly unknown territory through reflection, which is a way of pausing processes and anticipating possible problems before they occur. In the context of the current progress of artificial intelligence, certain dangers are becoming apparent, such as discrimination, loss of control, job insecurity and misinformation, all of such magnitude that they seem to make it advisable to halt technological development for as long as it takes to devise a regulatory approach, agree on ethical and political criteria, and establish supervisory and certification authorities. The authors of the open letter demand a six-month moratorium for this purpose.
The fundamental problem with a moratorium is that, in seeking to avoid certain risks of artificial intelligence, it accentuates others. Are we so sure that pausing the improvement of these models for a while is less risky than continuing to improve them? It is true that current systems pose many risks, but it is also dangerous to delay the emergence of more intelligent systems, as the moratorium would do. One possible unwanted effect would be the loss of transparency. If such a moratorium were decided, no one could ensure that the work of training such models would not continue covertly. This would pose the danger that their development, which has so far been largely open and transparent, would become more inaccessible and opaque.
On the other hand, something as drastic as halting dynamic and competitive technological sectors raises many doubts as to its viability, both for states and for the private sector. In the current geostrategic configuration of the world, so fragmented, and in which the technological race has become one of the main arenas of competition, a binding and mandatory regulation is unimaginable. Nor is there any reason for dominant companies to voluntarily apply a brake that could jeopardize their position. It is quite naive to believe that all programmers are going to shut down their computers and that the politicians of the entire world will sit down for six months to pass regulations binding on everyone.
In my opinion, there is a lack of understanding about the nature of technology, its articulation with humans and, specifically, the potential of artificial intelligence in relation to human intelligence, which is less threatened than those who fear digital supremacy believe. Of course, we face an increasingly disturbing gap between the speed of technology and the slowness of its regulation. Political debates and legislation are mostly reactive. A moratorium would have the advantage that the regulatory framework could be adopted proactively, before research progresses further. But things do not work that way, even less with such sophisticated technology. The moratorium petition describes a fictitious world because, on the one hand, it considers the victory of artificial intelligence over human intelligence possible and, on the other, it suggests that a few technical adjustments during a six-month pause in development would suffice. Which is it, then? How can the threat be so serious and, at the same time, a six-month moratorium be enough to neutralize it?
If we move from fictional politics to real politics, we find a very different scenario. The European Union is the political arena in which all this is being regulated most efficiently and quickly. Even so, the European Commission's proposal for an Artificial Intelligence Act has been on the table for almost two years, and its details have been under discussion ever since. Even if the law could be approved this year, it will probably take another two years before it applies in the EU member states. More than proof of irresponsibility or unjustified slowness, this confirms the complexity of the matter: regulatory processes cannot simply be accelerated, nor technological development stopped, when so many actors have to agree, including the very technological sector that is to be regulated.
ChatGPT has surprised everyone, generating fascination and panic in equal measure, by showing the extent to which a technology can simulate human capabilities. Beyond this first impression, it is easy to understand that it is something less extraordinary than it seems, since throughout history most techniques have been developed to improve, complement or even replace certain human activities. Inventing technologies that do certain things better than we do is no civilizational break, any more than defeating humans at chess or Go was a catastrophe. It is worth remembering that, historically, new technologies have always caused phases of social uncertainty, but these phases are only temporary.
The letter is an exercise in alarmism about the hypothetical risks of an artificial intelligence that would replace human intelligence. It attributes wildly exaggerated capabilities to these systems and presents them as more powerful tools than they really are. In doing so, it distracts attention from the problems that really do exist, which we have to reflect on now and not in a hypothetical future.
The main contribution of the call for a moratorium is to make broader segments of the population aware that significant issues are indeed at stake. The most valuable thing about the petition is its performative message: drawing attention to the importance of what science, technology, the economy, politics, educational institutions and the general public have before them, and calling for the necessary alliances to be forged.
The problem is not that artificial intelligence is too intelligent, now or in the future, but that it will remain too little intelligent until we have worked out its balanced and fair integration into the human world and the natural environment. And that will not be achieved by stopping anything, but through more reflection, research, collective intelligence, democratic debate, ethical supervision and regulation.