Institutions everywhere want to regulate ChatGPT. The famous chatbot based on generative artificial intelligence (AI) is beginning to raise suspicion around the world. There are fears about its effects on user privacy, its potential to spread misinformation and the jobs it could destroy if it is deployed in the workplace to perform tasks currently done by workers.
Two weeks ago, in a joint letter, more than a thousand specialists demanded a six-month moratorium on the development of these programs, and governments have answered that call. The United States opened a two-month period on Tuesday to gather ideas on how to regulate this type of tool. China has already introduced rules governing generative AI, as reported by Reuters: companies that want to use this technology must prove to Beijing that they meet a series of security requirements.
What is happening in Europe? On the Old Continent, the situation is more complex. The matter is advancing at different speeds and on several fronts at once. These are some of the keys to the debate taking place in the EU these days:
How does ChatGPT work?
GPT-4, the model behind the latest version of ChatGPT, is a large language model (LLM). That is the name given to artificial intelligence systems, or more specifically deep learning systems, trained on huge amounts of data (in this case, text) so that they can hold dialogues with the user. The program processes millions of texts (in the case of GPT-4, a vast swath of the internet) and applies a series of algorithms to predict which word is most likely to follow the previous ones in a coherent sentence. For example, if you type "the sky is the color," the system has read enough text during training to be able to say "blue."
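The next-word mechanism described above can be sketched with a toy frequency model. This is a deliberate simplification for illustration only: real LLMs like ChatGPT use deep neural networks with billions of learned parameters, not simple word counts, but the core idea of predicting the most likely follower is the same.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (hypothetical, not real training data).
corpus = (
    "the sky is blue . the sea is blue . "
    "the grass is green . the sky is clear"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue" is the most common word after "is" here
```

In this toy corpus, "blue" follows "is" more often than any other word, so the model predicts it; an LLM makes the same kind of statistical bet, but conditioned on long stretches of context rather than a single preceding word.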
The increase in computational power in recent years, the sophistication of the training algorithms and the huge databases used in the process have taken the operation far beyond predicting a single word. ChatGPT, like other similar models, is capable of writing entire coherent texts without spelling mistakes. The algorithm draws on some 175 billion parameters each time a question is asked. The result can be astonishing.
What threat does it pose to users?
The problem with the answers ChatGPT provides is that they are coherent, but not necessarily true. The program itself warns as much on startup: it "may occasionally generate incorrect information," as well as "harmful instructions or biased content." It also warns that the system "has limited knowledge of the world after 2021," the cutoff date of the data the model was trained on.
Beyond the quality of the information (or misinformation) the tool generates, there is a fear that the model's increasing sophistication (GPT-4 has just been presented and version 5 is already in the works) could end up doing work that, until now, has been done by people.
What has been done so far in the EU?
The Spanish Data Protection Agency (AEPD) and its European counterparts met this Thursday at the European Data Protection Board (EDPB), the body through which they coordinate. They decided to set up a "working group" to exchange information on the matter. The EU institutions themselves have not adopted any specific measures regarding ChatGPT.
Some member states, however, have. The latest is Spain: the AEPD announced on Thursday afternoon that it had opened, ex officio, "preliminary investigation actions" against OpenAI, the company that developed ChatGPT. According to agency sources, this does not mean that measures will be taken against the company, but rather that it is exploring whether the situation "justifies opening a procedure." Italy, for its part, blocked the application two weeks ago until it determines whether it violates European data protection regulations. The French, Irish and German authorities are also investigating that possibility.
For his part, the Internal Market commissioner, Thierry Breton, announced last week that content created by AI must carry a specific warning about its origin. "In everything that is generated by artificial intelligence, whether text or images, there will be an obligation to notify that it has been originated by them," he said.
Why regulate it?
So far, the most immediate threat to the interests of European citizens has been identified in the area of privacy. ChatGPT itself warns its users not to enter "personal data" into the system. But that is not considered enough.
Asked about the reasons for initiating "preliminary investigation actions," AEPD sources say they cannot reveal details so as not to prejudice the proceedings. According to experts consulted, the possible breaches of the regulations could involve the use of users' conversations with ChatGPT to train the model, as well as storage of their personal and payment data that does not meet all the security guarantees required in the EU.
How would it be regulated?
In Brussels, there is a behind-closed-doors debate about the best way to handle the situation. The majority opinion is that the EU already has (or soon will have) sufficient regulations to control the possible adverse effects of generative AI.
The spearhead of this strategy is the European AI Regulation (AI Act), a text that has been under negotiation since 2019 and whose final version has yet to be approved by all the European institutions. It is not expected to enter into force before the end of next year, or even 2025. The text classifies technologies according to the risk they pose to citizens and assigns restrictions accordingly. The most innocuous can operate without problems, while the riskiest are prohibited. Into this last category fall, for example, automatic facial recognition systems in public spaces and social credit scoring systems like those already in use in China.
During the French rotating presidency of the EU, a recommendation was made to include generative AI in the high-risk category. "It seems reasonable to me, but there is a lot of pressure from technology companies to keep it out," an MEP who has participated in the negotiation and drafting of the AI regulation told EL PAÍS.
Another MEP involved in the process, the Romanian Dragos Tudorache, said at a conference this week that the EU's response to the challenges posed by ChatGPT should be guided by the AI Act, not by the General Data Protection Regulation (GDPR), the rules Italy invoked to block the tool. "I am convinced that we need a unified response," he stressed.
Is a new regulation needed?
Commission sources consider that the future AI regulation "is designed to withstand future challenges." By regulating uses rather than the technologies themselves, the regulation can adapt to challenges such as the sudden explosion of generative AI, of which the public has been fully aware since the open launch of ChatGPT in November of last year.
Under this approach, the technology behind ChatGPT would not be banned unless it were deemed high risk. "But if someone uses it, for example, to process health data, then the regulations would apply," says Jan Philipp Albrecht, president of the German Heinrich Böll Foundation, which is linked to the Greens. "The problem that now appears is this: if we want to regulate generative AI more strongly, a new category should be introduced stating that general-purpose AI must be considered high risk if it can operate in risky fields," he adds. That is what the technology companies want to avoid at all costs, and to that end they are organizing meetings with European legislators.