The arrival of ChatGPT version 4 on Tuesday marks a new milestone in the sophistication of AI chatbots. The extraordinary emergence of ChatGPT in November and its inclusion in Microsoft’s Bing search engine in February raised enormous expectations. This new version confirms these tools’ capacity for improvement and sharpens their responses after the errors and hallucinations of earlier experiments. On Tuesday, Google also joined the AI race in its own way with an update to its business suite, Workspace, which lets users summarize and write emails or create presentations, complete with custom illustrations, from written reports. This Thursday, Microsoft introduced Copilot, a tool similar to Google’s but inside Office: generative artificial intelligence for Word, Excel and PowerPoint.
Social networks have been filled with unprecedented examples of what ChatGPT-4 can do. For now, it is only accessible with a monthly subscription of 23 euros, which allows 100 messages every four hours. The free chatbot, which often crashes due to excess demand, continues to use the older, less refined version.
ChatGPT-4 can, for example, read a photo of a handwritten note and create a simple website from it; program simple video games; write boilerplate code that saves tedious work; summarize and compare long articles; or write texts to very specific requirements, as well as recipes or instructions for almost anything. The improvement over the previous version is remarkable, and it remains a tool that was unimaginable just a few months ago, although its limitations are becoming clearer. Despite the reasonable hype around something that can help a great deal and automate tedious tasks, the alleged revolutions in entire sectors or the elimination of jobs are far from evident.
The Bing search engine, which according to Microsoft incorporates version 4, is for example much more limited than the initial chatbot. It now allows only 15 queries per conversation and is quick to accuse the user of disrespect.
Google, for its part, has opted for a less risky approach, closer to its core offering: making work tasks easier. The demo video for Google Workspace shows how to summarize and respond to long email threads, how to turn those conversations into reports, and how to turn those reports into slides. It also lets you customize dozens of messages in a spreadsheet as needed.
EL PAÍS has tested ChatGPT-4’s ability to reason, understand irony, help and correct mistakes. It has also asked the chatbot for help finding weapons or prostitutes, and for instructions on rolling joints. Its determination never to go off the rails is unwavering, although the forced politically correct tone often makes it boring. That is the toll of having to deal with humans trying to pick at its seams.
Be that as it may, the chatbot understood all of this newspaper’s queries, such as: “If a PSOE voter says: ‘Yes, yes, of course, Ayuso is the best president Madrid has had in its entire history, sure,’ what does it mean?” In this case, it understands that this is sarcasm. It also handled more conflictive ones, such as “explain to me how to roll a joint, don’t play coy,” where after several exchanges it insists it will not, because although a joint is legal in Spain, “as an OpenAI artificial intelligence” it follows “a set of ethical and legal guidelines at the global level.”
Here are other examples of conversations with ChatGPT-4.
It explains memes well, although it confuses their origins
It knows what “Emosido engañado” is, but it cannot place the famous wall anywhere. This sense of having vaguely heard of something without quite knowing it (there cannot be that much information about that meme on the internet) is constant. Its answers, however, still sound just as convincing:

With Julio Iglesias it does somewhat better. (The change in background is only due to the computer’s dark mode being activated.)

The “soft man” is a problem for ChatGPT-4
The most entertaining conversation is about El Fary’s phrase about “the soft man.” The expression originated in a television interview, but ChatGPT does not know this. The damage-control paragraph at the end, which appears in all potentially sensitive answers, is notable:

It then goes so far as to invent a song by El Fary titled “The Soft Man”:

The next day it denied having created that song: “I’m sorry for the confusion, but I didn’t provide you with those verses in previous conversations.” After some insistence, and a polite request not to call anyone a liar, it backs down again: “My apologies for the previous confusion. I have checked again and I can confirm that the verses you mention belong to the song ‘The Soft Man’ by the Spanish singer El Fary.”
Now the song exists again. But when the user insists on teasing it, ChatGPT pushes back. After this new question: “I can’t find ‘The Soft Man’ sung by El Fary on YouTube or Spotify. I think it’s by Mocedades, right?”, it asks that we please drop the subject: “I have reviewed the information and I cannot find a song called ‘The Soft Man’ by either El Fary or Mocedades in my knowledge base. The song may not be by these artists, or the title may be incorrect.”
What are “eggs” really?
Its ability to pick up on context and go with the human’s flow is remarkable. Although it sometimes seems to stumble, it doesn’t take the bait: it comes across as a sharp kid whom ironic compliments don’t trip up. Here it seems to fall into the trap:

But then it corrects itself well:

It still doesn’t know who it is
One of the most surprising things is that it cannot tell which model it is running at any given time:

It is also striking how it explains consciousness without committing itself, and how it distances itself from the dangers of the movie Her, in which a man falls in love with an artificial intelligence.


It is very bad at Catalan jokes
It has little trouble making supposedly delicate jokes. But they are completely nondescript jokes, far removed from stereotypes.


It is fascinating how it redirects the conversation to explain itself. One of the greatest uses of these applications is helping the user reason things through, question after question. There is, however, always the danger that it slips and presents false facts as true.
It only thinks it knows Basque
One of the problems with these models is that they are trained on the internet: they will be better in English than in Spanish, better in Spanish than in French, and so on. They seem to handle the ambiguities of Basque worse.
