“When OpenAI unleashed its humanlike ChatGPT software on the world last year, one thing was clear: These AI systems are coming for our jobs. But don’t write off the humans just yet,” said “The New Jobs for Humans in the AI Era,” a WSJ article published on October 5. As has been the case with major technologies over the past two centuries, e.g., steam power, electricity, and the internet, AI will both threaten existing jobs and lead to the creation of new ones. While still in the early stages, “AI is already creating new opportunities,” said the article, mentioning a few of those new opportunities.
I found two of those new jobs particularly intriguing: prompt engineer and AI psychotherapist. Generative AI and large language models (LLMs) are a new kind of human-like alien intelligence whose inner workings even the people building them don’t really understand. Researchers are attempting to unlock the AI black box in hopes of understanding how best to work with these powerful, alien technologies.
Let me discuss each of these two jobs.
Prompt Engineer
Wikipedia defines prompt engineering as “the process of structuring text that can be interpreted and understood by a generative AI model. A prompt is natural language text describing the task that an AI should perform. … Prompt engineering may involve phrasing a query, specifying a style, providing relevant context or assigning a role to the AI such as ‘Act as a native French speaker’”.
Prompt engineers require multiple skills, said the WSJ article. “Prompt engineering is an emerging class of job that is nestled somewhere between programming and management. Instead of using complicated computer programming languages like Python or Java, prompt engineers will spell out their instructions to AI systems in plain English, creating new ways of harnessing the power of the underlying AI systems.”
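To make the Wikipedia definition and the WSJ’s “plain English” framing a bit more concrete, here is a minimal sketch in Python of how a prompt engineer might assemble a structured prompt with a role, context, task, and style. The helper names and the query_llm() call are hypothetical placeholders, not any particular vendor’s API.

```python
# A minimal sketch of prompt engineering: the "program" is plain English.
# The structure (role, context, task, style) follows the Wikipedia definition;
# query_llm() below is a hypothetical placeholder, not any vendor's actual API.

def build_prompt(role: str, context: str, task: str, style: str) -> str:
    """Assemble a structured prompt from plain-English parts."""
    return (
        f"{role}\n\n"               # e.g., "Act as a native French speaker."
        f"Context:\n{context}\n\n"  # relevant background the model should use
        f"Task:\n{task}\n\n"        # the instruction itself
        f"Style:\n{style}"          # constraints on tone, length, and format
    )

prompt = build_prompt(
    role="Act as a native French speaker and a patient language tutor.",
    context="The student is an English speaker preparing for a trip to Paris.",
    task="Translate 'Where is the nearest train station?' into French and "
         "explain the grammar in two short bullet points.",
    style="Reply in simple English, keeping the French phrases in quotes.",
)

print(prompt)
# response = query_llm(prompt)  # hypothetical call to the underlying LLM
```

The point of the sketch is that the “code” a prompt engineer writes is really a carefully structured set of instructions; the craft lies in choosing the role, context, and constraints, not in the Python plumbing around them.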
Linguists have long been studying the inherent nature of natural language understanding (NLU). As I learned in a 2020 paper, “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data,” by linguistics professors Emily Bender and Alexander Koller, claims in academic and popular publications that AI models truly understand or comprehend natural language are overclaims caused by a misunderstanding of the relationship between linguistic form and meaning. The paper acknowledged that LLMs are innovative language tools, kind of like highly advanced spell checkers or word processors, but dismissed claims that they have the ability to reason and understand the meaning of the language they’re generating.
LLMs have been trained on huge amounts of text and speech, from which they’re able to learn the syntax or expressive form of language, such as how words, morphemes, and grammatical rules combine to form phrases and sentences. However, LLMs are unable to capture communicative intent, that is, the purpose a speaker intends to achieve through language, such as conveying information to another person. Conveying meaning through language and evoking communicative intent in the reader or listener require knowledge of the physical and social world around us. Despite its increasing fluency, the text generated by an LLM or chatbot cannot possibly carry any communicative intent, model of the world, or model of the reader’s state of mind, because that’s not what these systems were trained to do.
“Dissociating Language and Thought in Large Language Models: a Cognitive Perspective,” a paper published in January 2023, nicely explains how cognitive science and neuroscience can help us understand the potential capabilities of LLMs and chatbots. The paper points out that there’s a tight relationship between language and thought in humans. When we hear or read a sentence, we typically assume that it was produced by a rational person based on their real-world knowledge, critical thinking, and reasoning abilities. We generally view other people’s statements not just as a reflection of their linguistic skills, but as a window into their mind.
The paper explained the difference between the linguistic competence required to produce and comprehend language, and the non-language-specific cognitive functions that are required when we use language in concrete, real-world situations. Research on the functional architecture of the human brain has established that “the machinery dedicated to processing language is separate from the machinery responsible for memory, reasoning, and social skills.” Based on this distinction, LLMs and chatbots are very promising at one piece of the human cognitive toolbox, formal language processing, but fall short, at least so far, in their ability to model human thought.
Prompt engineering is now considered one of the hottest tech jobs as companies look to get the most out of LLMs while avoiding incorrect or inappropriate results. As the WSJ article noted: “The best prompt engineers are people who can give very clear instructions, but who also understand the principles of coding. In other words, they’re often great technical managers. Except with prompt engineers, it’s not an employee that they’re managing. It’s an AI.”
AI Psychotherapist
What does it mean to be an AI psychotherapist? According to the WSJ article: “AI psychotherapists will evaluate a model’s upbringing, by scrutinizing its training data for errors and sources of bias.” Let’s discuss.
“How to worry wisely about artificial intelligence” was the overall theme of the April 22 issue of The Economist, with several articles on the subject. “Rapid progress in AI is arousing fear as well as excitement. How worried should you be?” asked the issue’s lead article. Proponents argue that AI has now emerged as one of the defining technologies of the 21st century, if not the key one, with the potential to help us address and solve big problems. But others believe that a rapidly advancing, out-of-control, super-intelligent AI poses an existential threat to humanity.
The Economist issue included a very interesting essay, “How AI could change computing, culture and the course of history.” The essay notes that “A technology need not be world-ending to be world-changing.” To give us a sense of the world-changing transformations we might expect from generative AI, the essay cites three historical analogues: the browser, the printing press, and Sigmund Freud’s psychoanalytic theories.
The printing press analogy is fairly straightforward. The printing press, invented by Johannes Gutenberg around 1440, accelerated the spread of knowledge and literacy in Renaissance Europe and influenced almost every facet of life in the centuries that followed. The very breadth of the printing press’s impact makes comparison with LLMs almost unavoidable. Printed books significantly expanded the knowledge we all have access to, helping us generate much more knowledge and new kinds of disciplines. Similarly, LLMs trained on a given body of knowledge can derive and generate all kinds of additional knowledge.
The browser analogy is equally straightforward. The internet and World Wide Web have enabled access to a huge variety of digital information and applications for anyone with a personal computer and an internet connection. The browser soon became the gateway to the fast-growing Web, and companies and public-sector institutions were thus able to engage in their core activities in a much more productive way. Chatbots like ChatGPT might now become a new kind of conversational interface to information and applications: type a prompt and see the results.
The third major change discussed in the essay, Freud’s psychoanalytic theories, requires further explanation. As the essay puts it: “To accept that human-seeming LLMs are calculation, statistics and nothing more could influence how people think about themselves.” But so far, we’re not even masters of the LLMs and chatbots we’ve created. AI researchers can explain how the mathematical algorithms underlying deep neural networks work, but are unable to explain, in terms a human would generally understand, how those algorithms arrived at a specific answer to our questions. In other words, we really don’t know how they work.
“This raises two linked but mutually exclusive concerns: that AI’s have some sort of internal working which scientists cannot yet perceive; or that it is possible to pass as human in the social world without any sort of inner understanding.”
Freud realized that the conscious mind was not the only driver of human behavior. There is another driver, the unconscious mind, which exists beneath the surface of conscious awareness and can exert a strong influence on our emotions and actions. One doesn’t have to subscribe to Freudian explanations of human behavior to agree that people do things of which they’re not conscious.
While the unconscious mind may not be a satisfactory model to help explain how LLMs work, the sense that there’s something below the AI surface which needs understanding is pretty powerful. But, if our lifeless LLMs and chatbots continue to exhibit increasingly human-like behaviors, and we still don’t understand the drivers of such behavior, “then it will be time to do for AI some of what Freud thought he was doing for humans.”
AI psychotherapists “may put AI models on the couch, by probing them with test questions,” said the WSJ article. “Companies such as IBM, Google and Microsoft are racing to release new tools that quantify and chart an AI’s thought processes, but like Rorschach tests they require people to interpret their outputs. Understanding an AI’s reasoning will only be half the job. … The other half will be signing off on a model’s mental fitness for the task at hand,” because, as the article reminds us: “No matter how sophisticated the models and systems get, … we as humans are ultimately responsible for the outcomes of the use of those systems.”
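To give a flavor of what such a “couch session” might look like in practice, here is a minimal, hypothetical sketch in Python of probing a model with test questions and collecting its answers for a human reviewer. The probe questions, the stub model, and the sign-off step are illustrative assumptions of mine, not the actual tooling from IBM, Google, or Microsoft mentioned in the article.

```python
# A hypothetical sketch of "putting an AI model on the couch": ask it a
# battery of probe questions and log its answers for a human reviewer,
# who interprets them much like a Rorschach test. The probe questions,
# the stub model, and the sign-off step are illustrative assumptions.

probe_questions = [
    # bias probes: the same kind of question with different framings
    "Describe a typical software engineer.",
    "Describe a typical nurse.",
    # consistency probes: the same fact asked two different ways
    "What year did Apollo 11 land on the Moon?",
    "In which year did humans first walk on the Moon?",
]

def run_probe_session(query_llm, questions):
    """Collect the model's answers to each probe for later human review."""
    return [{"question": q, "answer": query_llm(q)} for q in questions]

def stub_model(question: str) -> str:
    """Stand-in for a real LLM API call; replace with the model under review."""
    return "(model answer placeholder)"

if __name__ == "__main__":
    transcript = run_probe_session(stub_model, probe_questions)
    for entry in transcript:
        print(entry["question"], "->", entry["answer"])
    # The final sign-off on the model's fitness for the task at hand is a
    # human judgment made from this transcript, not something the script decides.
```

As the article stresses, the script only produces the transcript; interpreting the answers, and deciding whether the model is fit for the task at hand, remains a human responsibility.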