In the space of a few weeks, the European Commission has stepped on the accelerator when it comes to controls on content generated by artificial intelligence (AI). So much so that some confusion is setting in about who is proposing what, in what format, and by when: a global voluntary code of conduct; an agreement ahead of the legislation now in the pipeline, the AI Act currently under negotiation in the European Parliament; or, as of this Monday, a commitment from digital platforms to clearly and “immediately” identify AI-generated content in order to combat disinformation. Amid the hubbub, one thing remains clear: Brussels is deeply worried about the potential “dark side” of these new technologies, especially generative AI such as ChatGPT, and time is running out to try to rein it in.
“New (generative) AI technologies can be a force for good and offer new possibilities for increased efficiency and creative expression. But they also have a dark side: they pose new risks and a potential for negative consequences for society in terms of the creation and dissemination of disinformation”, explained the Vice President of the European Commission for Values and Transparency, Vera Jourová.
For this reason, she announced, she wants generative AI to have a “separate and dedicated” track within the voluntary code of good practice against disinformation, which was strengthened a year ago but did not yet anticipate the power of generative AI. The code already has 44 signatories, including Facebook, YouTube, Google and TikTok, although one of the most powerful platforms, Twitter, has just abandoned it.
On the one hand, Jourová proposes that signatories of the code against disinformation that integrate generative AI into their services, as is the case, she said, with Microsoft’s Bing Chat or Google’s Bard, build the “necessary safeguards” to prevent “malign actors” from using their systems “to generate disinformation.”
In addition, the signatory companies must create technology that “recognizes” this type of content in their systems and “clearly indicates to users” that it was created not by a person but by a machine: a kind of label warning about AI-generated content. “Our main task is to protect freedom of expression, but I don’t see why machines should have freedom of expression,” said the Czech commissioner.
European legislation on new technologies is advancing by leaps and bounds: on August 25 the Digital Services Act (DSA) takes effect, imposing transparency and access obligations on the algorithms of the largest digital platforms (those with more than 45 million users, 10% of the potential European market). Among other things, this regulation provides for the rapid removal of illegal content, the protection of fundamental rights (restrictions on the use of data based on race or religion) and fines that, for the tech giants, can reach up to 6% of their global revenue.
The DSA already provides for a kind of labelling, to “ensure that an item of information, whether it is a generated or manipulated image, audio or video that appreciably resembles existing persons, objects, places or other entities or events and that may mislead a person into believing it is authentic or truthful, is distinguished by prominent markings when presented on its online interfaces”.
The future AI Act also provides for higher transparency requirements for generative AI models, which must make it clear that their content has been generated by AI.
But the DSA will not fully apply for almost three months and, in the case of the AI Act, for years: the European Parliament must approve its position in plenary next week, and only then will negotiations with the Member States begin to settle on a common text, which Brussels does not believe can fully enter into force before 2026. All this while the technology advances non-stop.
For this reason, Jourová proposed this Monday that the label for AI-generated content be implemented “immediately” among the signatories of the code against disinformation, as a way of anticipating what will become mandatory at the end of August.
“The label is a faster tool. We want the platforms to label AI output in such a way that a normal user, distracted by many things, clearly sees that it is not a text or image created by real people, but rather a robot that is speaking,” she explained. “It is important that there is speed, with immediate labelling, and clarity,” she insisted.
Proof of the brisk pace that Brussels wants to set is the fact that the person chiefly responsible for the AI Act, Internal Market Commissioner Thierry Breton, is promoting the so-called AI Pact, a “voluntary commitment” by companies to comply in advance with the highest possible standards provided for in the AI Act.
The Pact, agreed in May during the visit to Brussels of Google chief Sundar Pichai, has already been officially presented to the ministers of the Twenty-seven and is expected to be ready in the last quarter of the year so that it can serve until 2026. As Breton also explained to a group of journalists on Monday, it is a kind of “antechamber” to the AI Act that will give signatory companies “direct access” to the Commission teams preparing the entry into force of the new AI rules, so that they “understand what it implies and what habits they have to change”, such as the ban on introducing into the EU technology that enables so-called social credit systems or biometric identification in public spaces, such as facial recognition.
Twitter’s “mistake”
Two weeks after Twitter announced its withdrawal from the EU code of good practice against disinformation on the internet, which contains some 40 recommendations for better cooperation with fact-checking services in order to prevent the spread of false information, Commissioner Jourová described the decision as a “mistake” that could have serious consequences for Elon Musk’s online giant, in what amounted to a veiled threat.
“We believe it is a mistake on Twitter’s part; it has chosen the hard way, confrontation,” lamented the head of Transparency. Although she acknowledged that the code is voluntary, she immediately pointed out that leaving it has consequences because, in the long run, “whoever wants to operate and make money in the European market will have to adjust” to European laws.
“Make no mistake: by abandoning the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will come under rigorous scrutiny,” she warned.