The big technology companies that now dominate the artificial intelligence (AI) market, such as Google, are not only focused on achieving outstanding technical advances in the field; they also want a say in how the scope and uses of their developments are regulated, a priority for governments around the world.
This Wednesday, Google CEO Sundar Pichai announced a voluntary agreement with European Union lawmakers to establish a provisional set of rules and standards around AI development.
The agreement, now known as the "AI Pact," comes at a time when EU authorities are working to create a formal, legally enforceable regulatory framework to control the development and use of artificial intelligence in the region.
Pichai met with Thierry Breton, the European Union's internal market commissioner, whose office issued a statement after the meeting saying: "There is no time to lose in the AI race to build a secure online environment."
In a subsequent report, the commissioner's office said the European Union wants to be "proactive" and work on a pact "involving all major European and non-European AI players on a voluntary basis" before legislation on artificial intelligence is formalized in the region.
The EU commissioner said on Twitter that "AI technology evolves at an extreme speed. Therefore, we need a voluntary agreement on universal rules for artificial intelligence now," and alluded to the work plan announced by the G7 countries around this nascent industry.
So far, no further details have been given about what the pact agreed today between Google and the EU entails. The Mountain View company is the only one that has publicly stated its intention to participate.
Google's announcement is especially striking considering that it came shortly after the leaders of OpenAI issued a statement making clear that they do not trust any existing authority to control the rapid progress that AI-based systems have demonstrated in recent months.
Google and company urgently need to regulate AI: it’s all about business
OpenAI founder Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever admit that developments like ChatGPT and other artificial intelligence-based platforms need to be regulated, a mission for which, they believe, it is necessary to create an international regulatory body.
Concerns about what it would mean to leave the advance of AI to chance, without the oversight of a universal regulatory body, have been raised by many nations and specialists in the field.
The examples are many: from the open letter signed by several authoritative voices in the technology world, led by Elon Musk, that called for pausing AI development until clear rules are established to protect society from the technology's possible harmful impacts, to the ChatGPT ban in Italy, which triggered a series of investigations on the matter.