The race to make money with artificial intelligence — unleashed by big technology — is now being followed by another race to regulate these tools, driven above all by the opacity surrounding the consequences of their use and the origin of their data. Two weeks ago, Italy decided to block the use of ChatGPT for breaching data protection regulations and lacking filters for minors. Today, the governments of the two superpowers, the US and China, announced steps toward regulating these artificial intelligence programs. The Joe Biden Administration has opened a 60-day period to gather ideas on how to legislate against the unwanted effects of these programs, which pose risks in fields as diverse as privacy, misinformation, and the labor market. For its part, Beijing has made public a regulatory proposal that would require security and legitimacy from the providers of these applications.
The US Department of Commerce has filed a formal request for public comment on accountability measures, as reported by The Wall Street Journal, including whether potentially dangerous new AI models should go through a certification process before launch. "It is surprising to see what these tools can do even in their initial stage," Alan Davidson, director of the National Telecommunications and Information Administration, told the American newspaper. "We know that we need to put some protection measures in place to make sure they are used responsibly," adds Davidson, who leads the initiative.
The Cyberspace Administration of China also unveiled on Tuesday draft measures to regulate generative artificial intelligence services, saying it wants companies to submit security assessments to authorities before launching their products to the public, according to Reuters. The rules drafted by this regulator state that providers will be responsible for the legitimacy of the data used to train their generative artificial intelligence products, and that measures must be taken to avoid discrimination when designing algorithms and training on that data.
In addition, the regulator says that China supports innovation in these tools, but that the content they generate must be in line with the country's core socialist values. The announcement comes after a slew of Chinese tech giants, including Baidu, SenseTime, and Alibaba, showcased new applications ranging from chatbots to image generators. They thus join companies such as Microsoft and Google, which are already seeking to integrate these tools into their services.
Doubts in Europe
These announcements come as several European governments consider how to mitigate the dangers of this emerging technology, which has exploded in popularity among consumers in recent months following the launch of ChatGPT by OpenAI, a company initially backed by Elon Musk and now boosted by $10 billion from Microsoft. Brussels wants content generated by artificial intelligence to carry a specific warning, as announced by the European Commissioner for the Internal Market, Thierry Breton: “In everything that is generated by artificial intelligence, whether it is text – everyone now knows ChatGPT – or images, there will be an obligation to notify that it has been created by artificial intelligence.”
Following the ban announced in Italy by the Guarantor for the Protection of Personal Data, France, Ireland and Germany acknowledged contacts to weigh whether to follow suit. Privacy regulators in France and Ireland contacted their Italian counterparts for more information on the reasons for the ban, and the German data protection commissioner told the newspaper Handelsblatt that Germany could follow Italy's lead and block ChatGPT over data security risks.
Now, the National Commission for Informatics and Liberties (France's privacy watchdog) has disclosed that it is investigating several complaints about ChatGPT. Meanwhile, the Spanish Data Protection Agency, like its Italian counterpart, has requested that the possible regulation of generative artificial intelligence systems be discussed at this Thursday's meeting of the European Data Protection Board, the body that coordinates the data protection agencies of the member countries.
The controversy over the dangerous capabilities of these tools goes beyond the legislative arena, as shown a few weeks ago when more than a thousand specialists demanded a six-month moratorium on the development of these programs. “Artificial intelligence labs have entered an uncontrolled race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” the letter warned.