Doubts about the impact of ChatGPT, the artificial intelligence application created by OpenAI, have reached Spain. After arousing suspicion among the governments of Italy, the United States and China, the Spanish Agency for Data Protection (AEPD) announced this Thursday that it “has initiated ex officio prior investigation proceedings against the American company for a possible breach of regulations”, according to a statement from the Spanish state entity.
The suspicion is that the platform may be illegally collecting user data: the conversations users hold with the machine, which are used to train its algorithms, and the payment information of its subscribers. The platform may also be vulnerable to attack and lack an effective filter to verify the age of its users.
As reported by the AEPD, the initiative arose last week, when the Spanish agency asked the European Data Protection Board (EDPB) “to include the ChatGPT service as a topic to be addressed in its plenary meeting”. The Spanish body considers that “global treatments that can have a significant impact on the rights of individuals require harmonized and coordinated actions at the European level in application of the General Data Protection Regulation.”
The EDPB, of which the Spanish agency is a member, took up the proposal at the meeting held this Thursday and decided to create a working group for cooperation and the exchange of information among the entities that make up the Board.
In this way, the Spanish investigation will not be an isolated, individual effort, but will form part of the one undertaken by the other data protection authorities.
The AEPD clarifies that the investigation is not directed against “the development and implementation of innovative technologies such as artificial intelligence”, but seeks to guarantee “full respect for current legislation”. “Only from this starting point can technological development compatible with the rights and freedoms of the people be carried out”, it concludes.
At the end of March, Italy decided to block the use of ChatGPT for breaching data protection regulations, becoming the first European country to adopt such drastic measures. The block will be lifted once OpenAI shows that it complies with Italian privacy regulations.
France, Ireland and Germany immediately requested information from the Italian authorities and, after those contacts, the German data protection commissioner also came out in favor of blocking ChatGPT because of the risk to data security.
For its part, the National Commission for Informatics and Liberties (CNIL, France’s privacy watchdog) announced, as Spain has now done, that it is investigating the application following complaints filed by users.
The European Commissioner for the Internal Market, Thierry Breton, had already warned of Brussels’ doubts about the development of this technology and announced that content created by artificial intelligence must carry a specific warning about its origin: “In everything that is generated by artificial intelligence, whether text or images, there will be an obligation to notify that it has been created by artificial intelligence.”
All these European doubts have now converged with the EDPB’s decision to create the investigative working group of which the Spanish Agency for Data Protection is a part.
On Tuesday, the Joe Biden Administration opened a 60-day period to gather ideas on how to legislate against the unwanted effects of these artificial intelligence programs, which can pose risks in fields as disparate as privacy, disinformation and the labor market.
“It’s amazing to see what these tools can do even in their early stage. We know that we need to put some safeguards in place to make sure they are used responsibly,” said Alan Davidson, director of the National Telecommunications and Information Administration and promoter of the US legislative initiative.
The Cyberspace Administration of China also put forward a proposal on Tuesday to regulate generative artificial intelligence services. According to the text, the Beijing government intends for the companies behind these technologies to submit security assessments to the authorities before launching their products to the public, as reported by Reuters. The draft rules state that providers will be responsible for the legitimacy of the data used to train their products and for taking measures to avoid discrimination when designing algorithms and using that data.
On March 29, more than a thousand specialists, including Elon Musk, Apple co-founder Steve Wozniak and historian Yuval N. Harari, called in a joint letter for a six-month moratorium on the development of these programs. “Artificial intelligence labs have entered an uncontrollable race to develop and implement increasingly powerful digital minds, which no one – not even their creators – can reliably understand, predict or control,” the letter warned.
Generative artificial intelligence is a computer application capable of creating text, images, video or music. Its potential has led Microsoft to invest $10 billion in OpenAI, the company that launched ChatGPT, in order to automatically generate content across all its products, from the word processor to email.
The difference from previous services is that this new AI (artificial intelligence) capability can create from scratch, rather than merely recognize patterns or search existing information to produce a result. Its output ranges from original product designs to new digital photographic works or music. Another of its greatest strengths is its potential for the development of services, with the ability to parse complex requests and build a solution from numerous factors. Shares of the American media company BuzzFeed shot up on the stock market when it announced that it would use the technology to personalize its content.
And that is just one example of its use. Design firms of all kinds can use it to create new models faster; pharmaceutical companies, to generate new compounds; doctors, to refine diagnoses; and production companies, to create high-quality video content tailored to the demands of their audiences.
Set against these potential advantages, its use raises the threat of malicious exploitation for hacking, as well as legal, ethical and reputational doubts and the risk of generating false content. Its use also carries a high energy cost and, therefore, pollution.