Professor Emily M. Bender is on a mission: she wants us to know that the apparent wonder of ChatGPT is more of a parrot. Not just any parrot, but a “stochastic parrot”. “Stochastic” means that it chooses word combinations based on a calculation of probabilities, but it doesn’t understand anything it says. It’s hard to chat with ChatGPT or Bing and stay aware that it’s a parrot and only a parrot. But for Bender, a lot of bad things depend on that awareness: “We are in a fragile moment,” she says. And she warns: “We are interacting with a new technology and the entire world needs to quickly become literate enough to deal well with it.” Her message, in short, is: please, it’s a machine that does one thing very well, but nothing more.
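At its core, “stochastic” word choice can be illustrated with a toy sketch: given a probability for each candidate next word, the system simply draws one at random, weighted by those probabilities. The words and numbers below are invented for illustration and have nothing to do with any real model.

```python
import random

# Toy distribution over possible next words after some prompt.
# These probabilities are made up purely for illustration.
next_word_probs = {"talks": 0.5, "flies": 0.3, "sleeps": 0.2}

def sample_next_word(probs, seed=None):
    """Pick the next word at random, weighted by its probability."""
    rng = random.Random(seed)
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Each call may return a different word; "talks" comes up most often.
print(sample_next_word(next_word_probs))
```

Nothing in this loop involves meaning or understanding; the output is fluent only because the probabilities were learned from fluent text, which is exactly Bender’s point.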
Bender, a computational linguist at the University of Washington, sensed that this could happen as early as 2021, when she published a now celebrated academic paper on “the dangers of stochastic parrots”: “We didn’t say this was going to happen. We said that this could happen and that we should try to avoid it. It was not a prediction. It was a warning. There we just talked a little bit about how dangerous it is to make something look human. It is better not to imitate human behavior because that can lead to problems,” says Bender, 49, by videoconference to EL PAÍS. “The more people are aware, the easier it will be to see the great language models as simple text synthesis machines and not as something that generates thoughts, ideas or feelings. I think [their creators] want to believe that it is something else,” she adds.
That false humanity has several problems: “It will make us trust it. And it does not assume responsibility. It has a tendency to make things up. If it produces a text that is true, it is by chance,” she says. “Our societies are a system of relationships and trust. If we start to lose that trust in something that has no responsibility, there are risks. As individuals who interact with this, we need to be careful what we do with our trust. The people who build it need to stop making it look human. It shouldn’t be speaking in the first person,” she adds.
Less of a potential Terminator
The labor of making them more human is probably not accidental. Without it, the hype caused by ChatGPT would have been more subdued: it would not have given that impression of a potential Terminator, of an attentive friend, of a visionary sage. “They want to create something that looks more magical than it is. It seems magical to us that a machine can be so human, but in reality it is the machine creating the illusion of being human,” says Bender. “If someone is in the business of selling technology, the more magical it looks, the easier it is to sell it,” she adds.
Researcher Timnit Gebru, co-author with Bender of the parrot paper, who was fired from Google over it, lamented on Twitter that the president of Microsoft admitted in a documentary about ChatGPT that “it’s not a person, it’s a screen.”
If someone is in the business of selling technology, the more magical it seems, the easier it is to sell it.”
The hype, however, is not only due to a company that has made a chatbot speak as if it were human. There are AI applications that create images, and soon videos and music. It’s hard not to hype these developments, even though they’re all based on the same type of pattern recognition. Bender asks for something difficult for the media and for the way social media is structured today: context. “You can do new things and still not overdo it. You may ask: is this AI art or is it just image synthesis? Are you synthesizing images or are you imagining that the program is an artist? You can talk about technology in a way that keeps people at the center. Countering the hype is a matter of talking about what is really being done and who is involved in building it,” she says.
It must also be taken into account that these models are based on an unimaginable amount of data that would not be possible without decades of feeding the Internet with billions of texts and images. There are obvious problems with that, according to Bender: “This approach to language technology relies on having data at the scale of the Internet. In terms of fairness between languages, for example, this approach is not going to scale to every language in the world. But it’s also an approach that’s fundamentally caught up in the fact that you’re going to have to deal with that internet-scale data including all sorts of junk.”
That junk doesn’t just include racism, Nazism, or sexism. Even on serious sites, rich white men are overrepresented, words like “Islam” carry connotations from widely seen headlines, and so does the way the West sometimes talks about the countries that immigrants come from. All of this is at the heart of these models: re-educating them is an extraordinary and probably endless task.
Humans are not that
The parrot has not only made Bender famous. Sam Altman, founder of OpenAI, the creator of ChatGPT, has tweeted a couple of times that we are stochastic parrots. Perhaps we humans also reproduce what we hear after a probabilistic calculation. This way of diminishing human capabilities makes it possible to inflate the alleged intelligence of machines, which is the next step for OpenAI and other companies in a sector that lives almost in a bubble. Ultimately, it will allow them to raise even more money.
“The work on artificial intelligence is tied to seeing human intelligence as something simple that can be quantified, and to the idea that people can be classified according to their intelligence,” says Bender. This classification allows future milestones to be set for AI: “There is ‘artificial general intelligence’, which does not have a good definition, but it is something like a system that can learn flexibly. And then there’s still ‘artificial superintelligence’, which I heard about the other day, and which must be even smarter. But it’s all imaginary.” The leap between the AIs we see today and a machine that really thinks and feels remains extraordinary.
On February 24, Altman published a post titled “Planning for AGI (Artificial General Intelligence) and beyond.” It is about “ensuring that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity.” Bender went to Twitter to ask, among other things, who these people are to decide what benefits all of humanity.
From the get-go this is just gross. They think they are really in the business of developing/shaping “AGI”. And they think they are positioned to decide what “benefits all of humanity”. pic.twitter.com/AJxExcxDY3
— @email@example.com on Mastodon (@emilymbender) February 26, 2023
This inflation of ChatGPT allows Altman to present his post as something almost real, with potential. “Sam Altman seems to really believe that he can build an autonomous intelligent entity. To maintain that belief, he has to take existing technology and say yes, it looks close enough to the kinds of autonomous intelligent agents he envisions. I think it is harmful. I don’t know if they believe what they’re saying or if they’re being cynical, but they sound like they believe it,” says Bender.
If this belief that AIs do more than they seem to, that they are smarter, spreads, more people will tend to accept that they slip into other decision-making spheres: “If we believe that real artificial intelligence exists, we will also be more likely to believe that of course we can build automated decision systems that are less biased than humans, when in fact we can’t,” says Bender.
“Like an oil spill”
One of the most talked about possibilities for these text models is whether they will replace search engines. Microsoft, with Bing, is already trying. The various changes applied to its model since it came out are proof of its difficulties. Bender compares it to an “oil spill”: “That’s a metaphor that I hope will stick. One of the harms of these text synthesis machines, set up as if they can answer questions, is that they are going to put non-information into our information ecosystem in a way that will be hard to detect. That looks like an oil spill: it will be difficult to clean up. When companies talk about how they are constantly making progress and improving their accuracy, it’s like BP or Exxon saying, ‘Look how many birds we saved from the oil we poured on them.’”
OpenAI wants to talk about the future. But I would rather talk about how we regulate what we have built now.”
While we’re talking about that improbable future, we’re not paying attention to the present, Bender says. “OpenAI wants to talk about how we make sure that AI will be beneficial to all of humanity and how we will regulate it. But I would rather talk about how we regulate what we have built now and what we need to do so that it doesn’t cause problems today, rather than this distraction about what would happen if we had these autonomous agents,” she says.
She has not lost hope that some kind of regulation will arrive, partly because of the computational effort that these models require. “It takes a lot of resources to create one of these things and run it, which gives a little more leeway for regulation. We need regulation around transparency. OpenAI is not being open about it. Hopefully that would help people understand better.”
Science fiction is not the only future
Bender often has to hear that she is an angry woman complaining about technology, despite directing a master’s program in computational linguistics: “I don’t feel hurt when people tell me that, because I know they’re wrong. Although they also show this point of view of believing that there is a predetermined path toward which science and technology take us, and it is the one that we have learned from science fiction. It is a self-defeating way of understanding what science is. Science is a group of people who spread out and explore different things and then talk to each other, not people who run along a straight path, trying to be the first to the end.”
Bender has one last message for those who believe that this path will be accessible and simple: “What I am going to say may be sarcastic and simplistic, but perhaps they are just waiting for us to reach a point where these models are fed with so much data that they spontaneously decide to become conscious.” For now, that’s the plan.