Last month, days before the coronation of King Charles III on May 6, a request to ChatGPT for a profile of the monarch yielded a striking result.
The artificial intelligence (AI) chatbot from the firm OpenAI stated in one paragraph:
“The coronation ceremony took place at Westminster Abbey, London, on May 19, 2023. The abbey has been the site of the coronations of British monarchs since the 11th century, and is considered one of the holiest and most iconic places in the country.”
That paragraph presents what is known as a “hallucination”.
That is the term used in the AI field for information provided by the system that, although written in a coherent manner, contains incorrect, biased or completely erroneous data.
The coronation of Charles III was scheduled for May 6, but for some reason ChatGPT concluded that it would take place on May 19.
The system warns that it can only generate answers based on information available on the internet up to September 2021, so it can fall into this type of error when answering a query.
“GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations and adversarial prompts,” OpenAI explained when it launched the GPT-4 version of the chatbot last March.
But the phenomenon is not exclusive to OpenAI’s system. It also appears in Google’s chatbot, Bard, and in other similar AI systems that have recently been released to the public.
A few days ago, journalists from The New York Times tested ChatGPT by asking it about the first time the newspaper published an article on AI. The chatbot offered several answers, some of them containing incorrect data, or “hallucinations”.
“Chatbots are powered by a technology called the large language model, or LLM, which learns its skills by analyzing massive amounts of digital text pulled from the internet,” the authors of the Times article explained.
“By identifying patterns in that data, an LLM learns to do one thing in particular: guess the next word in a sequence of words. It acts as a powerful version of an autocomplete tool,” they continued.
But because the web is “full of false information, the technology learns to repeat the same falsehoods,” they warned. “And sometimes chatbots make things up.”
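The next-word mechanism the Times journalists describe can be illustrated with a toy example. The Python sketch below is purely illustrative and vastly simpler than a real LLM: it counts which word tends to follow which in a tiny made-up training text that deliberately contains one false date, then "autocompletes" a prompt one guessed word at a time. The corpus, function names and output are all invented for this illustration; they are not OpenAI's code or data.

```python
import random
from collections import defaultdict, Counter

# Toy training text. It deliberately contains one false date ("may 19")
# alongside the correct one ("may 6"), to show how a model trained on
# flawed text can repeat the flaw.
corpus = (
    "the coronation took place at westminster abbey on may 19 . "
    "the coronation took place at westminster abbey on may 6 . "
    "westminster abbey is in london . "
)

tokens = corpus.split()

# Count how often each word follows each word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Guess the next word by sampling according to observed frequencies."""
    counts = following.get(word)
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def autocomplete(prompt, length=8):
    """Extend a prompt one guessed word at a time, like an autocomplete tool."""
    words = prompt.split()
    for _ in range(length):
        guess = next_word(words[-1])
        if guess is None:
            break
        words.append(guess)
    return " ".join(words)

print(autocomplete("the coronation took place at"))
# May continue with "... on may 6" or "... on may 19" (or drift into the
# other sentence): the model simply reproduces whichever patterns it saw
# in its training text, true or false.
```

A real LLM predicts the next word from far richer context than a single preceding word, but the basic point holds: if false statements are common in the training text, the statistical patterns the model learns will sometimes reproduce them.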
Take it with a grain of salt
Generative AI and reinforcement learning algorithms can process a huge amount of information from the internet in seconds and generate new text that is almost always very coherent and impeccably written, but which must be taken with caution, experts warn.
Both Google and OpenAI have asked users to keep this consideration in mind.
In the case of OpenAI, which has an alliance with Microsoft and its Bing search engine, the company points out that “GPT-4 has a tendency to ‘hallucinate’, i.e. ‘produce content that is nonsensical or false in relation to certain sources’”.
“This tendency can be particularly damaging as models become more and more convincing and credible, leading to users becoming overly trusting of them,” the firm clarified in a document attached to the launch of its new version of the chatbot.
Users, therefore, should not blindly trust the answers it offers, especially in areas that involve important aspects of their lives, such as medical or legal advice.
OpenAI notes that it has worked on “a variety of methods” to prevent “hallucinations” from appearing in responses to users, including having real people evaluate answers to guard against inaccurate data, racial or gender bias, and the spread of misinformation and fake news.