Talking about artificial intelligence is fashionable. ChatGPT, Open Assistant, Bard and other highly relevant models constantly appear in our daily conversations. However, just because it’s trendy doesn’t mean artificial intelligence is something new. In fact, it has been around for many years now, albeit in applications less dazzling than the large language models.
What is new is the exponential explosion in the capabilities of these systems and the democratization of artificial intelligence that their sudden arrival is bringing about, from the double perspective of users and developers. That is why the recent call by Elon Musk and other technologists to pause the development of “systems more capable than GPT-4” has caused me some astonishment. A Non-Proliferation Treaty aimed at preventing the spread of nuclear weapons makes all the sense in the world, since one cannot go to a shopping center and buy centrifuges to enrich uranium. However, anyone with a minimum of technological knowledge can rent space on a cloud server and start developing artificial intelligence systems based on open-source libraries and publicly accessible training data sets. Therefore, prohibiting the development of artificial intelligence applications, without going into whether it is a good idea or not, does not seem feasible to me. A different matter is to prohibit or limit certain uses of artificial intelligence, and that is precisely what the future European law on artificial intelligence intends to do, among other things.
Rather than trying to hold back the tide, what needs to be done is to regulate and draw red lines that cannot be crossed, while promoting the innovation capacity of European universities and industry in everything else. I am aware that today there are more questions than answers in this area, and that determining what can and cannot be done is therefore an almost daunting task. However, although it is not easy and we still have a lot to learn about AI, the regulator’s task is to define the responsibilities of each of the actors involved in the life of an AI system (because responsibility cannot be offloaded onto the machines), so that oversight agencies are empowered to intervene quickly and remove from circulation those who intend to make irresponsible use of this technology. Regulation must combine solid foundations with the flexibility needed to adapt to the rapid advances that are taking place (and will continue to take place) in this field.
For this reason, my proposal is that we address the governance of artificial intelligence in the same way that the regulation of commercial aviation was addressed in its day: with rigorous international safety standards, regardless of cost, and with a constant process of improvement and updating in which professionals learn not only from accidents (which, fortunately, are increasingly rare in commercial aviation) but from any small incident or error.
The Chicago Convention, which created the International Civil Aviation Organization (ICAO) nearly 80 years ago, set out a regulatory framework for international governance and rigorous technical standards, which ICAO member states must develop into laws in their respective jurisdictions and which airlines must follow to the letter if they want to fly across international borders. The regional or national supervisory authorities of the ICAO member states (in the case of the European Union, this also includes the European Aviation Safety Agency, EASA) only grant their permits after a long cascade of certifications.
Pilots only get their flight licenses after very rigorous training, and the same applies, at their respective levels, to the mechanics who inspect the planes and to air traffic controllers. Aircraft can only be sold and put into operation after passing countless tests that check every last nut and bolt. And even the solvency and competence of airline management teams are reviewed and certified, because a high-risk activity like this cannot be left in the hands of a team that does not demonstrate sufficient capacity and experience. This is pure common sense.
It is evident that the aviation enthusiast who builds his own plane is not the same as Airbus or Boeing manufacturing models of commercial aircraft in which millions of passengers will travel; likewise, the computer enthusiast who builds an artificial intelligence system for personal use is not the same as a company or a state that deploys an artificial intelligence system that will have an impact on the lives of thousands or millions of people. It is in the latter case that regulation and the existence and adoption of standards are particularly important.
Thorough review of algorithms
It is worth highlighting that commercial aviation takes a data-driven approach: of course, nobody waits for a plane to crash to look at its black box. Airlines, under the supervision of inspectors and authorities, carefully analyze the slightest noise or anomaly in the data recorded by the systems during the flight. In other words, everything is processed and compared, over and over again, so that flying is a very low-risk activity, and we all benefit from this very intelligent ecosystem: those of us who board a plane, by arriving safely at our destination; the sector’s companies and professionals, by earning a decent living. Note that, contrary to the usual discourse of technology companies, in commercial aviation safety is never treated as a cost or a barrier to innovation, but rather as the condition without which the business itself would immediately cease to exist.
In this sense, I welcome the fact that the proposed artificial intelligence regulation is based on a risk-analysis system and provides for a series of requirements applicable to high-risk AI systems, in particular for their providers, such as the obligation to draw up an EU declaration of conformity and to affix the CE marking of conformity. These certifications, logically, must complement the data protection certifications, seals and marks, which of course must also be applied with the same rigor to systems that process personal data. In addition, just as airplanes undergo mechanical inspections at regular intervals or when certain parameters are met, artificial intelligence systems must undergo regular mandatory audits in which, as if they were the nuts and bolts of an aircraft, the algorithms and data behind their operation are reviewed by inspectors and mechanics to ensure the safety of the system and prevent accidents. And it will not do to claim that this information is protected by alleged intellectual property rights: it may be withheld from competitors, but never from the inspector.
In short, if, as everything seems to indicate, artificial intelligence continues to advance at a dizzying rate in the coming years, replacing people and allowing machines to make decisions that will affect our lives on a daily basis, then natural, basic intelligence advises us to copy the successful model of those who make a living safely transporting people at thirty thousand feet and at speeds of over five hundred miles per hour.
Similarly, just as passengers can claim damages in the event of a plane crash, there need to be clear and specific rules in the field of AI for when accidents occur; and occur they will, especially at the beginning. In this sense, I applaud the fact that a proposal on liability in the field of artificial intelligence is being processed at the European level, although it would be desirable for the AI regulation itself to also include a right to claim compensation for damages caused by artificial intelligence systems. Only then will effective compliance with and enforcement of AI laws be ensured.
It is in our power, therefore, now that this sector is still in its infancy, to lay the pillars of a safe artificial intelligence, with equivalent international standards, that inspires the necessary confidence in citizens and contributes positively to the progress of humanity.
So, ladies and gentlemen, fasten your seatbelts: artificial intelligence is coming.
Leonardo Cervera Navas is the director of the European Data Protection Supervisor