In the last month, the various manifestos calling for a pause in the development of artificial intelligence because of its potential negative impact on humanity have pointed to an even more dangerous ultimate scenario: the emergence of AGI, an artificial intelligence capable of reasoning like humans. Researchers at Microsoft assert that this scenario is just around the corner after analyzing GPT-4, the OpenAI natural language model that powers ChatGPT and the chatbot in Microsoft's Bing search engine, both available in Spain.
In the midst of the artificial intelligence boom, in March a team of Microsoft researchers published a 155-page report in which they argued that their system was one step away from artificial general intelligence, or AGI. The term refers to a system or technological model able to show comprehension or reasoning similar to that of humans, although some also use it to describe systems that can perform multiple tasks beyond those they were trained on.
In any case, most of the scientific and technological community affirms that this milestone has not yet been reached, and cannot determine whether it will arrive soon, or ever. However, in recent years several voices have proclaimed the arrival of this superintelligence, which many view with suspicion, even fear, for the power it could exert over human beings. Coincidentally, the alerts that have received the most prominence come from two large private companies: Google and now Microsoft. In 2022, Google placed on leave an engineer who claimed its AI had come to life; the rest of the community dismissed the possibility.
Is GPT-4 conscious?
Microsoft researchers began by asking GPT-4: “Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them on top of each other in a stable manner.” The question is deliberately far-fetched, designed to strain the reasoning ability of the natural language model.
These systems generate text based on millions of parameters or examples on which they are trained to replicate the way humans write and speak in different contexts. Their quality has surprised the general public and experts alike, with the models even able to pass complex exams.
The AI's response surprised the team: “Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the confines of the book and eggs, its flat, rigid surface providing a stable platform for the next layer,” the technology explained.
The report, entitled “Sparks of Artificial General Intelligence,” was published on the internet. In another of the tests, the researchers asked the AI for a mathematical exercise: to show that there are infinitely many prime numbers, in the form of a rhyme. The researchers, including Dr. Bubeck, a former professor at Princeton University, acknowledge that the answer made them doubt what they were seeing.
They worked with this technology for months, testing it with images, text and various subject areas. They also asked it to write programming code to, for example, create a program that takes into account a person's age, sex, weight, height and blood test results and ultimately judges whether they are at risk of diabetes. With these and other tests, the system seemed to understand fields such as politics, physics, history and medicine. “All the things I thought it wouldn't be able to do? It was able to do many of them, if not most of them,” Dr. Bubeck told The New York Times.
Doubts about the report
As already happened in the Google case, part of the academic community involved in developing this technology has been skeptical of the Microsoft article. The New York Times this week collected statements such as those of Maarten Sap, a researcher and professor at Carnegie Mellon University, who considers Microsoft's scientific report a public relations stunt: “They literally acknowledge in the introduction to their article that their approach is subjective and informal and may not meet the rigorous standards of scientific evaluation.”
The company sends a message of relative calm, stating that the version of GPT-4 tested in this experiment is not the one that is publicly available. That version, supposedly more powerful, had not been restricted to avoid hate speech, misinformation and other unwanted content, as has been done with the tool already in use globally. For this reason, the chatbot's claims reflected in the document cannot be verified by other experts.
Artificial intelligence specialists who reject the idea that we are facing an AGI explain the conclusions others have reached as a mirage. They affirm that, when faced with a complex system or machine whose operation is difficult to understand, people tend to anthropomorphize it, whether they are experts or users with no knowledge of the subject.