“How to worry wisely about artificial intelligence” was the overall theme of the April 22 issue of The Economist, with several articles on the subject. “Rapid progress in AI is arousing fear as well as excitement. How worried should you be?,” said the issue’s lead article. “In particular, new ‘large language models’ (LLMs)—the sort that powers Chatgpt, a chatbot made by Openai, a startup—have surprised even their creators with their unexpected talents as they have been scaled up.”
“Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.”
The Economist issue included an essay, “How AI could change computing, culture and the course of history,” by European business editor Ludwig Siegele, written in close collaboration with briefings editor Oliver Morton. I found the essay quite interesting. I’ve long been discussing advanced technologies with Siegele, given his previous positions as technology correspondent for The Economist. Let me summarize the essay’s key points.
Over the past few decades, there have been multiple points of view on the long-term impact of AI. At one end are those who believe that a technological singularity will be reached some time in the future by an exponentially advancing superintelligence that would eventually pose an existential risk to humans.
At the other end are those who believe that increasingly sophisticated AI-based tools will help us process vast amounts of information and better address ever more complex problems. For example, in a 2014 Wired article, Kevin Kelly wrote that AI will become a kind of “cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. … Everything that we formerly electrified we will now cognitize. … Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization.”
Siegele’s essay reminds me of Kelly’s article. “The remarkable boom in the capabilities of large language models (LLMs), foundational models and related forms of generative AI has propelled these discussions of existential risk into the public imagination and the inboxes of ministers,” the essay notes. And while agreeing that these powerful new models bring clear and present dangers which should be addressed, “It is hard to imagine them underpinning ‘the power to control civilisation’, or to ‘replace us’, as hyperbolic critics warn.”
But, the essay adds: “A technology need not be world-ending to be world-changing.” To give us a sense of the world-changing transformations we might expect from generative AI, the essay cites three historical analogues: the browser, the printing press and Sigmund Freud’s psychoanalytic theories. Let me briefly discuss how each of these historical changes might shed light on AI’s long-term impact.
The browser
The universal reach and connectivity of the internet and World Wide Web ushered in our 21st-century digital economy by enabling anyone with a personal computer, an internet connection and a browser to access a huge variety of information and applications. Companies and public-sector institutions were thus able to carry out their core activities in a much more productive way.
“The humble web browser, introduced in the early 1990s as a way to share files across networks, changed the ways in which computers are used, the way in which the computer industry works and the way information is organised,” said the essay. The browser soon became the gateway to information and applications on the fast-growing Web, leading to the so-called browser wars, as different companies competed to develop proprietary browsers whose incompatible features didn’t work on all websites. Eventually, browser developers converged on the standards set by the World Wide Web Consortium (W3C), founded in 1994.
Chatbots like ChatGPT might now become a new kind of conversational interface to information and applications: type a prompt and see the results. Over time, the ability of LLMs to help with software development might even allow them to generate code on the fly. “The capacity to translate from one language to another includes, in principle and increasingly in practice, the ability to translate from language to code. A prompt written in English can in principle spur the production of a program that fulfils its requirements.” Such a code-as-a-service innovation could become a game-changer “in both the way people use computers and the business models within which they do so.”
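To make the language-to-code idea concrete, here is a hypothetical illustration: an English prompt (the kind a user might type into a chatbot) paired with the sort of small program an LLM might plausibly produce in response. Both the prompt and the function are invented for illustration, not taken from any actual model output.

```python
# Prompt (English): "Write a function that returns the n largest values in a list."
# A plausible program an LLM might generate in response:
def n_largest(values, n):
    """Return the n largest elements of values, in descending order."""
    return sorted(values, reverse=True)[:n]

print(n_largest([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [9, 6, 5]
```

The point of the essay’s “code-as-a-service” framing is that the user never sees or runs this code directly; the natural-language request is the interface, and the program is an ephemeral artifact produced to fulfil it.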
Browsers provide a way to access and interact with content that’s mostly produced by humans. LLMs, however, generate their own content, and while they’re quite good at generating text, speech, images, and videos given a few prompts, they have no mechanisms for checking the truth of what they generate. The generated sentences might thus be linguistically plausible, but may in fact be incorrect or nonsensical. “They create things which look like things in their training sets; they have no sense of a world beyond the texts and images on which they are trained.”
“In many applications a tendency to spout plausible lies is a bug.” But, warns the essay, “For some it may prove a feature. … Expect the models to be used to set up malicious influence networks on demand, complete with fake websites, Twitter bots, Facebook pages, TikTok feeds and much more.”
The essay references a March 2021 paper that coined the term stochastic parrots to describe the behavior of LLMs. Its authors argued that LLMs are merely remixing the enormous number of human-authored sentences used in their training. Their impressive ability to generate cogent, articulate sentences gives us the illusion that we’re dealing with a well-educated and intelligent human, rather than with what’s essentially a stochastic parrot that has no human-like understanding of the sentences it’s generating.
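The “remixing” claim can be illustrated, in drastically simplified form, with a toy bigram model: it learns only which word tends to follow which in its training text, then generates new sentences by sampling from those statistics. Real LLMs are vastly more sophisticated, but the sketch captures the sense in which output is a statistical recombination of the training corpus rather than a product of understanding.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def parrot(table, start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
table = train_bigrams(corpus)
print(parrot(table, "the"))
```

Every word the generator emits comes straight from the training text, and every word pair it produces was seen there; the output can look fluent without the model having any notion of cats, dogs or rugs.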
The printing press
The printing press, invented by Johannes Gutenberg around 1440, accelerated the spread of knowledge and literacy in Renaissance Europe. Gutenberg’s printing revolution influenced almost every facet of life in the centuries that followed, starting with the Protestant Reformation, which leveraged the printing press to undermine the Catholic Church’s monopoly on information dissemination.
The very breadth of the printing press’s impact makes comparison with LLMs almost unavoidable. Printed books significantly expanded the knowledge we all have access to, helping us generate much more knowledge and whole new disciplines. Similarly, LLMs trained on a given body of knowledge can derive and generate all kinds of additional knowledge.
“As a way of presenting knowledge, LLMs promise to take both the practical and personal side of books further, in some cases abolishing them altogether. An obvious application of the technology is to turn bodies of knowledge into subject matter for chatbots. Rather than reading a corpus of text, you will question an entity trained on it and get responses based on what the text says. Why turn pages when you can interrogate a work as a whole?”
Chatbots are already being developed to help users interact with specific kinds of knowledge. The essay mentions that Bloomberg recently introduced BloombergGPT, a “50-billion parameter large language model, purpose-built from scratch for finance.” BibleGPT, an AI chatbot trained on the Bible, aims to give advice on important questions in life. So does QuranGPT for those seeking guidance and insights into the teachings of Islam. “Meanwhile several startups are offering services that turn all the documents on a user’s hard disk, or in their bit of the cloud, into a resource for conversational consultation.”
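Services of this kind typically pair an LLM with a retrieval step: given a question, find the passage in the user’s documents most likely to contain the answer, then hand that passage to the model as context. The retrieval half can be sketched with a deliberately simple bag-of-words overlap score; production systems use embeddings and vector search, so this is a toy illustration of the idea, not any vendor’s actual implementation.

```python
def score(query, doc):
    """Count how many of the query's words appear in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d)

def best_passage(query, docs):
    """Pick the snippet that best matches the question; an LLM would then
    be prompted with this snippet to compose a conversational answer."""
    return max(docs, key=lambda d: score(query, d))

docs = [
    "Invoices are due within 30 days of receipt.",
    "The office closes at 6pm on Fridays.",
]
print(best_passage("when are invoices due", docs))
# → "Invoices are due within 30 days of receipt."
```

The design point is separation of concerns: the user’s documents stay in a searchable store, and the LLM is used only to phrase answers grounded in whichever passage the retrieval step selects.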
Freud and AI: What does it mean to be human?
The third major change discussed in the essay is particularly intriguing. “To accept that human-seeming LLMs are calculation, statistics and nothing more could influence how people think about themselves.”
“Freud portrayed himself as continuing the trend begun by Copernicus — who removed humans from the centre of the universe — and Darwin — who removed them from a special and God-given status among the animals. Psychology’s contribution, as Freud saw it, lay in ‘endeavouring to prove to the ego of each one of us that he is not even master in his own house.’”
And so far, we’re not even masters of the LLMs and chatbots we’ve created. AI researchers can explain how the mathematical algorithms underlying deep neural networks work, but are unable to explain, in terms a human would generally understand, how those algorithms arrived at a specific recommendation. In other words, we don’t really know how they work.
“This raises two linked but mutually exclusive concerns: that AIs have some sort of internal working which scientists cannot yet perceive; or that it is possible to pass as human in the social world without any sort of inner understanding.”
Freud realized that the conscious mind was not the only driver of human behaviors. There was another driver, the unconscious mind, which exists beneath the surface of conscious awareness and can exert a strong influence on our overall emotions and actions. One doesn’t have to subscribe to Freudian explanations of human behavior to agree that people do things of which they’re not conscious.
While the unconscious mind may not be a satisfactory model to help explain how LLMs work, the sense that there’s something below the AI surface which needs understanding is pretty powerful. But, if our lifeless LLMs and chatbots continue to exhibit increasingly human-like behaviors, and we still don’t understand the drivers of such behavior, “then it will be time to do for AI some of what Freud thought he was doing for humans,” wrote Siegele in conclusion.
“And human desires may need some inspection, too,” he added. “Why are so many people eager for the sort of intimacy an LLM might provide? Why do many influential humans seem to think that, because evolution shows species can go extinct, theirs is quite likely to do so at its own hand, or that of its successor? And where is the determination to turn a superhuman rationality into something which does not merely stir up the economy, but changes history for the better?”