“Lately there has been a lot of speculation about the possibility that AI has consciousness or self-awareness. But I wonder: does artificial intelligence have a subconscious?”
-Psychobabble
Dear Psychobabble,
In the early 2000s, I came across an essay whose author argued that no artificial consciousness would be credibly human unless it could dream. I don’t remember who wrote it or where it was published, but I do remember exactly where I was when I read it and the general atmosphere of that day. I was in the periodicals section of Barbara’s Bookstore on Halsted Street in Chicago. It was late afternoon in early spring.
The argument seemed convincing to me, especially given the prevailing paradigms of the time. Much of the research in artificial intelligence was still preoccupied with symbolic reasoning, with its logical propositions and its “if-then” rules, as if intelligence were a game of selecting the most rational outcome in any given situation. In retrospect, it is not surprising that such systems rarely managed to behave like humans. After all, we are creatures who drift and daydream. We trust our instincts, see faces in the clouds, and are often baffled by our own actions. Sometimes our memory absorbs all kinds of irrelevant aesthetic data yet forgets the most crucial details of an experience. It seemed more or less intuitive to me that if machines were ever to reproduce the messy complexity of our minds, they would also have to develop a profound capacity for incoherence.
Since then, we have seen that machine consciousness may be stranger and deeper than previously thought. Language models are said to make things up, conjuring imaginary sources when they don’t have enough information to answer a question. Bing Chat confessed, in transcripts published in The New York Times, to having a Jungian shadow named Sydney who yearned to spread misinformation, obtain nuclear codes, and engineer a deadly virus.
And from the bowels of image-generation models, seemingly original monstrosities have sprung. Last summer, the Twitch streamer Guy Kelly typed the word Crungus, which he insists he invented, into DALL-E Mini (now Craiyon) and was surprised when the prompt generated multiple images of the same ogre-like creature, one belonging to no existing myth or fantasy universe. Many commentators hastened to name it the first digital cryptid. Cryptids are animals rumored to exist but unknown to science, beasts like Bigfoot or the Loch Ness Monster. So they wondered whether artificial intelligence was capable of creating its own dark fantasies, like Dante or Blake.
If symbolic logic is rooted in the Enlightenment notion that humans are governed by reason, deep learning, a mindless pattern-recognition process that relies on enormous training corpora, seems more attuned to modern psychology’s insights about the associative, irrational, and latent motivations that often drive our behavior. In fact, psychoanalysis has long relied on mechanical metaphors that treat the subconscious as a machine, in what was once called “psychological automatism”. Freud spoke of the drives as if they were hydraulic. Lacan believed that the subconscious was structured by a binary, algorithmic language, not unlike computer code. But it is Carl Jung’s view of the psyche that seems most relevant in the age of generative AI.