It has been an important week for artificial intelligence. The giants Google and Microsoft, locked in fierce competition, each announced that they are integrating an AI assistant into their most popular applications, from Gmail to Word. An algorithm will write your emails; Google Docs will be able to rewrite your worst paragraphs; and a copilot, as Microsoft calls it, will analyze your Excel tables and tell you in words what it finds.
And then, of course, came GPT-4.
The language model behind ChatGPT, the application responsible for catapulting this technology to mass popularity, has just been updated. GPT-4 is more powerful (that is, smarter) and adds new features. It comes with an impressive résumé: this version is capable of passing a host of college-level exams in writing, reading, and math, often with scores that would put it in the top 10% of the class.
There are many examples of its capabilities. GPT-4 can look at a photo of a hand-drawn sketch and create a simple website; it can program small video games or summarize and compare long articles. I gave it the ‘Think clear’ logic test, which it handled very well.
But today I want to focus on a characteristic of these artificial intelligences that is often misunderstood. It is presented as a limitation of GPT-4 and similar models, when it is, from my point of view, a cause for astonishment.
Distinguishing capabilities from goals
A common criticism of these artificial intelligences is to say that they are “only” statistical models that predict the most likely word to continue a text. They do nothing else, their critics say; there is nothing spectacular about them.
It’s the other way around.
It is true that GPT-4 learned this way, to a large extent. Its training consisted of playing a very simple game: given a sequence of text, it had to predict the next word (or token) and be right as often as possible. For example, faced with the phrase “Benidorm is a destination for sun and ___”, it would surely win by saying “beach”. Its learning consisted of playing that game over and over, with one sentence and then another, until it had used almost all the text on the internet and ended up being terrific at that tiny game.
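To make the game concrete, here is a minimal sketch in Python (my own illustration, nothing like OpenAI's actual code) of the crudest possible word predictor: one that simply counts, in an invented three-sentence corpus, which word most often follows another.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "almost all the text on the internet".
corpus = (
    "benidorm is a destination for sun and beach . "
    "malaga is a destination for sun and beach . "
    "the alps are a destination for snow and ski ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Play the game: answer with the statistically safest continuation."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("and"))  # -> 'beach'
```

GPT-4 does something vastly more sophisticated than counting pairs of words, but the game it is scored on, guess the continuation, is the same.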
Therefore, yes, GPT was trained with the aim of predicting words, but the abilities that emerged from that learning can be many others: language, logic, or reasoning skills. And it is not easy to know exactly which ones it has, because what happens inside a neural network is largely unknown.
To appreciate the difference, it helps to understand how GPT works and how it learns. Large language models are a type of neural network: mathematical structures that can be seen as a huge blank slate, with billions of possible connections between nodes, connecting their inputs (a text; sometimes images) with their outputs (another text). Imagine a gigantic pinball machine, where the inputs are balls that bounce through the structure and, depending on their properties, end up at certain exits. The network is a flexible structure capable of generating enormously complex functional forms.
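To give the simile some flesh, here is a toy version of that pinball machine, again my own sketch with a handful of random weights rather than anything from a real model:

```python
import math
import random

# A miniature "pinball machine": numbers go in, bounce through two
# layers of weighted connections, and a number comes out. Real models
# have billions of weights; this sketch has sixteen, chosen at random.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # inputs -> hidden layer
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # hidden layer -> output

def forward(x):
    # Each hidden node mixes the three inputs and "bounces" them
    # through a nonlinearity; the output then mixes the hidden nodes.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([0.2, -0.5, 0.9]))  # one ball in, one number out
```

The real network differs in scale and detail, but the idea is the same: a fixed structure of weighted connections that turns inputs into outputs.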
But it is important to understand that no programmer alters that structure by hand or inserts concrete rules into it.
So how does this network get organized in order to function? Where does the specific configuration come from that lets GPT-4 add numbers, pretend to be angry, or understand what adolescence is? Almost all of it emerges in the training I described earlier, an autonomous process, which justifies the term “artificial intelligence.” At the risk of oversimplifying: learning consists of playing the game of predicting the next word in a sentence millions of times, adjusting each time one of the parameters or weights that define the network, through an old mathematical optimization process. “In the end, it’s about determining which weights will best capture the training examples that have been given,” summarizes Stephen Wolfram. This iterative procedure weaves the giant skein that is the neural network so that it becomes very good at predicting words, because that is its objective function.
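The optimization Wolfram describes can also be sketched, at the cost of brutal simplification, with a single weight instead of billions. In this invented example the “right answer” for an input x is simply 2x:

```python
import random

# One weight instead of billions, and a stand-in game: for an input x,
# the "right next word" is the number 2*x. These are invented examples
# for illustration, not real training data.
examples = [(x, 2 * x) for x in range(1, 6)]

w = random.random()       # start from a blank slate
learning_rate = 0.01

for step in range(1000):  # GPT repeats this billions of times
    x, target = random.choice(examples)
    prediction = w * x
    error = prediction - target
    # Nudge the weight in the direction that reduces the error:
    # this is gradient descent, the old optimization process.
    w -= learning_rate * error * x

print(round(w, 3))  # ~2.0, the weight that best captures the examples
```

Nobody tells the program that the rule is “multiply by two”; the rule emerges from repeating the game and nudging the weight.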
Where is the misunderstanding I was talking about? In thinking that this is the only thing GPT-4, or any neural network, does. Its objective function defines its goal, yes, but not its capabilities. In fact, any skill that is useful for predicting words can potentially emerge within the network. If it helps it to know how to multiply, reason logically, or think like a child, it could, perhaps, learn all of these.
And that is the reason for my fascination.
Of course, this does not mean that these artifacts are alive, that they have consciousness or anything like that. Nor that they are intelligent in the way we humans are. But they are emergent: algorithms capable of abilities that nobody put into them, which appeared, pushed by a process of improvement in pursuit of a goal. A simple goal, predicting words, but enough for them to learn to write bad poetry and paint beautiful landscapes.
If you want an exaggerated simile: do you know what else is the result of a similar emergent process? We humans. The theory of evolution says that the engine of life is to maximize one result: surviving and propagating genes. It is a limited goal, really, neither lofty nor romantic, but it has been enough to bring us here and push us to paint on walls, gaze at the stars, and think about sleeping babies.
PS: This text is an honest attempt to share my impressions of a technology that interests me, and that I studied two decades ago, although I am no expert. If you want a less enthusiastic view, skeptics have celebrated Noam Chomsky’s article “The False Promise of ChatGPT” (in English). I also recommend this piece by Ezra Klein, between fascination and fear, and, if you want mathematical details, this one by Stephen Wolfram.
Other stories
1. How much does your municipality invest in education?
My colleagues have dug into the budget settlement data of every town hall in Spain to investigate how much they invest in schools, classrooms, and other education-related items. There are cities, like Pamplona, that spend 20 times more than Almería: almost 500 euros per child or young person, which is 10% of its entire municipal budget.
In the article you can look up your municipality’s spending on an interactive map.

2. We did well predicting the Oscars
Last week we correctly predicted the awards for best picture, best director, and best animated feature. We also got three of the four acting awards right, all except the one we saw as most doubtful: the statuette for best supporting actress, which was almost a toss-up between Angela Bassett (37% probability) and Jamie Lee Curtis (35%), and which ended up in the hands of the latter.
3. Question: Was GPT-4 on the news?
With the publication of the new language model, one of my predictions for 2023 may come true, although I have not yet checked whether it appeared on the news. Have you seen it there?
Can you help me? Forward this newsletter to whoever you want, and if you are not subscribed, sign up here. It is a newsletter exclusive to EL PAÍS subscribers, but anyone can receive it during a one-month trial. You can also follow me on Twitter at @kikollaneras, or write to me with tips or comments at kllaneras@elpais.es.