The leaders of OpenAI, the creators of ChatGPT and DALL-E 2, have warned in a statement about how dangerous it may be for humanity to create an artificial superintelligence (AI). They argue that something so powerful could lead to “self-destruction” and call for regulation of such systems.
The main measure the developers propose is the creation of an institution similar to the International Atomic Energy Agency (IAEA), which monitors that nuclear energy is used in a controlled manner. In the same way, OpenAI envisions a “watchdog” responsible for regulating the use and development of artificial intelligence for the safety of society. It is worth remembering that the advance of AI seems unstoppable, and companies like BT have already announced massive layoffs in favor of artificial intelligence.
Greg Brockman and Ilya Sutskever, co-founders of OpenAI: «Superintelligence will be the most powerful technology that humanity has had to deal with in its history»
OpenAI says AI will lead to a… better world?
Greg Brockman and Ilya Sutskever, co-founders of the company, together with chief executive Sam Altman, warned in the note published on their website that within the next 10 years AI systems will become so capable that they will exceed expert skill in most domains and carry out productive activity comparable to that of some of today’s largest corporations. “All this can portend a prosperous future, but it also implies the possibility of existential risk. Ultimately, superintelligence will be the most powerful technology humanity has ever had to deal with in its history.”
In the same note, the three executives called for short-term coordination among the companies currently working on AI research. The aim is to ensure the development of models that can be integrated into society while guaranteeing its safety. For this project, they request the collaboration of governments in order to reach a collective agreement.
The Center for AI Safety (CAIS), which works to “reduce the risks of AI on a societal scale,” has also weighed in on the matter. It goes further, describing a future in which, as humans, “we will lose the ability to govern ourselves,” depending completely on machines; all to the benefit of a small group of people who control the “powerful systems,” making AI a centralizing force and plunging humanity into “a caste system between governed and rulers.”
Nonetheless, OpenAI maintains that the development of AI will “lead to a better world,” with results already visible in areas such as education and certain jobs. The company also warns that pausing research could be dangerous, and concludes that the “continued development of powerful systems is worth the risk.”