Elon Musk and Steve Wozniak, the co-founder of Apple, have just signed a letter calling for a six-month moratorium on training artificial intelligence (AI) systems. The purpose is to give society time to adapt to what the signatories call an "AI summer", which, in their opinion, will end up benefiting humanity, as long as the appropriate safeguards are put in place. Safeguards that, among other things, must include rigorously monitored security protocols.
It is a laudable objective, but there is something even better we could do with those six months: retire the hackneyed label "artificial intelligence" from public debate. The term should be relegated to the same dustbin of history as the "iron curtain", the "domino theory", and the "Sputnik moment".
AI has always been a project of the military, industry, and elite universities, and it continues to be so, even though access to it has now been democratized. As a term, "artificial intelligence" survived the end of the Cold War thanks to its appeal to science fiction enthusiasts and investors. But we can afford to hurt their feelings. Why keep reliving the traumas of the Cold War, when the term acts as such a corset on our imagination?
In reality, what we call "artificial intelligence" today is neither artificial nor intelligent. Early AI systems were heavily dominated by rules and programs, so at least the word "artificial" was justified. But today's systems, like everyone's favorite ChatGPT, are not based on abstract rules but on the work of real human beings: artists, musicians, programmers, and writers, whose creative and professional output is appropriated by those systems under the pretext of saving civilization. If anything, it is "non-artificial intelligence".
As for "intelligence," Cold War priorities, which funded much of the early work in AI, greatly shaped how we understand it. It is the kind of intelligence that would be useful in battle. For example, the great strength of current AI is its ability to find patterns. Not surprisingly, since one of the first military uses of neural networks, the technology on which ChatGPT is based, was detecting ships in aerial photographs.
However, many critics have pointed out that intelligence is not just about finding patterns or following rules. The ability to generalize is also essential. Marcel Duchamp's 1917 work Fountain is a good example. Before Duchamp, a urinal was just a urinal. But Duchamp changed the perspective and turned it into a work of art. In doing so, he was generalizing about art.
When we generalize, emotion overrides the entrenched and apparently "rational" classifications of everyday ideas and objects. It suspends the usual, almost mechanical, pattern-finding operations. Not something you would want to do in the middle of a war.
Human intelligence is not one-dimensional. It rests on what the Chilean psychoanalyst Ignacio Matte Blanco called bi-logic: a fusion of the static and timeless logic of formal reasoning with the contextual and highly dynamic logic of emotion. The first looks for differences; the second erases them at full speed. Our mind knows that the urinal belongs with the toilet; our heart does not. Bi-logic explains how we rearrange mundane things in new and illuminating ways. We all do it, not just Marcel Duchamp.
AI will not be able to do this, because machines cannot have a sense (as opposed to mere knowledge) of the past, present, and future. Without that sense, there is no emotion, which eliminates one of the two components of bi-logic. As a consequence, machines remain trapped in singular formal logic. That is what the "intelligence" part is reduced to.
ChatGPT has its uses. It is a predictive engine that can also serve as an encyclopedia. Asked what a wine rack, a snow shovel, and a urinal have in common, it correctly answers that they are everyday objects that Duchamp turned into art.
But asked which of today's objects Duchamp would turn into art, it replies: smartphones, electric scooters, and face masks. There is no glimpse here of bi-logic or, let's face it, of "intelligence". It is a statistical machine that works well but is boring. It has its uses, of course. But the real debate, then, should be about the extent to which we depend on statistical thinking, rather than about the advantages of "artificial intelligence" over "human intelligence", or of man over machine.
The danger of continuing to use a term as inaccurate and outdated as "artificial intelligence" is that we risk becoming convinced that the world runs on a singular logic: that of highly cognitive, emotionless rationalism. Many in Silicon Valley already believe this and are busy rebuilding the world on that conviction.
But the reason tools like ChatGPT are capable of doing anything even minimally creative is that their training data was produced by real human beings, with their complex emotions, heartaches, and all. And, in many cases, it was not the market, let alone Silicon Valley venture capital, that paid for it. If we want that creativity to continue, we need to fund the production of art, fiction, and history, not just data centers and machine learning.
At present, things do not appear to be heading in that direction. The biggest danger of not retiring terms like "artificial intelligence" is that they blind us to the creative work of intelligence and, at the same time, make the world more predictable and stupid. The term, with its apolitical veneer of progress, makes it harder to discern the motives of Silicon Valley and its investors; and, when push comes to shove, their motives do not always coincide with the people's.
So instead of spending six months staring at algorithms while we wait for the "AI summer", we would do better to reread Shakespeare's A Midsummer Night's Dream. That would contribute far more to making the world a smarter place.