Google and the European Union are seeking to establish voluntary standards for the regulation of artificial intelligence (AI) before specific legislation comes into force. The European Commissioner for the Internal Market, Thierry Breton, and the CEO of Google, Sundar Pichai, recognize the importance of reaching agreements prior to the implementation of the new regulations. However, Sam Altman, CEO of OpenAI, has expressed concern, suggesting that his company could stop operating in the European Union if laws are introduced that it considers “too authoritarian.”
Altman set out his position during a talk at University College London, saying that the company disagrees with how the regulation is being drafted. Although he has met with European Union officials to discuss the details of the law, Altman believes those meetings have not been productive in addressing how the evolution of artificial intelligence should be regulated.
“We agreed that we can’t afford to wait for the AI law to come into force, and to work together with all developers to introduce a voluntary pact,” Breton told the European press after speaking with Google’s CEO, Sundar Pichai.
Were OpenAI to follow through on its warning, ChatGPT and other services powered by its GPT-4 model would no longer be available in European territory.
It is important to note that the European Union considers systems such as ChatGPT “high-risk” and requires companies offering services built on this type of system to comply with additional requirements. These include the obligation to disclose whenever content is generated by an AI, as well as taking measures to prevent the production of illegal or false content.
Despite his differences with the proposed standards, Altman acknowledges that artificial intelligence can cause “significant damage to the world” and has expressed support for regulation that guarantees the safety of AI and allows people to access its benefits. Appearing before the US Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law, Altman expressed concern about the potential harm ChatGPT could cause through misuse of the technology, highlighting the lack of solid regulation on the matter.
That is why OpenAI has implemented initiatives to make its AI safer, such as a bounty program that rewards users who detect bugs in the system and a grant program to fund security research.