In a B-movie, the sociological crisis generated by ChatGPT would be resolved when the policeman discovers that there is a Martian inside the machine. If the film were an A-picture, the policeman would fall in love with the machine, just as researcher Kate Darling predicts. But in both cases we would be dealing with an alien intelligence, for want of a better name for a non-human mind. The best science fiction writers have speculated from time to time about this dizzying concept. Is there only one form of intelligence, or many, perhaps infinite? Even the keen imagination of that guild has fallen short on this front. Fictional Martians often have not only two arms and two legs but also something very similar to a human brain. And as long as ET fails to show up, we are not going to settle the question.
But now we have ChatGPT and half a dozen other similar systems based on so-called large language models (LLMs). Although these models are loosely inspired by the human brain, the truth is that they work in a very different way. A baby does not need to swallow a billion cat photos to learn to recognize a cat, but that is exactly how the machine learning that is all the rage these days works. Language models do not swallow images but texts, obsessively recording which words tend to appear next to which others. Their way of building sentences is not based on learning concepts or grammar rules; it is more of an exercise in statistical brute force. ChatGPT does not understand the concept of a verb, but it uses verbs correctly because it has seen how we earthlings of flesh and nerve use them, and that alone is enough to produce well-formed sentences. But the machine does not know what it is saying (and no, I am not going to make the obvious joke about pundits).
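To make that idea of "statistical brute force" a little more concrete, here is a minimal sketch in Python of a toy word-counting model: it records which words tend to follow which others in a tiny corpus and then extends a text by always choosing the most frequent continuation. It is only an illustration of the counting intuition described above, not of how ChatGPT actually works; real systems use neural networks trained on vast corpora, and every name and the corpus in the snippet are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words a real model ingests.
corpus = "the cat sat on the mat . the cat chased the mouse . the dog sat on the rug ."

# Record, for every word, how often each possible next word follows it.
follows = defaultdict(Counter)
tokens = corpus.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def continue_text(word, length=5):
    """Extend a text by repeatedly picking the most frequent next word."""
    output = [word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(continue_text("the"))  # extends "the" using only the recorded word counts
```

The sketch never learns what a cat or a verb is; it only exploits the regularity of which words sit next to which, which is the point the column is making.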
But now notice two things. The first is that the raw data from which our brain builds an internal model of the world is not much richer than what the robot uses. We open our eyes and see what is in front of us with such ease that we do not stop to think about the monumental problem of natural engineering this entails. Strictly speaking, all we see are lines of different orientations. It is our visual brain that abstracts those lines into angles, polygons, polyhedra and a grammar of shapes that allows us to understand the scene in front of us. We cannot lightly rule out that the machine can form something similar to a concept, even though its starting material is as modest as the proximity between words.
The second thing is that these language models are just one small piece of the overall artificial intelligence project, whose goal is to design systems that can reason, plan and solve problems. Paradoxically, getting there may mean engineers reviving the discipline's older approaches, which relied on teaching the robot rules and symbols rather than making it gobble up Wikipedia and the National Library before breakfast. Historian Yuval Noah Harari believes that ChatGPT is a threat to civilization. He hasn't seen anything yet.