In his seminal 1950 paper, Computing Machinery and Intelligence, Alan Turing proposed what’s famously known as the Turing test: a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. If a human at a keyboard cannot tell whether they are interacting with a machine or another human, the machine is considered to have passed the Turing test. “Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers and entrepreneurs,” wrote Erik Brynjolfsson, Stanford professor and Director of the Stanford Digital Economy Lab, in a recent article, The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence.
“The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like - in fact, many of the most powerful systems are very different from humans - and an excessive focus on developing and deploying HLAI can lead us into a trap. … On the one hand, it is a path to unprecedented wealth, increased leisure, robust intelligence, and even a better understanding of ourselves. On the other hand, if HLAI leads machines to automate rather than augment human labor, it creates the risk of concentrating wealth and power. And with that concentration comes the peril of being trapped in an equilibrium where those without power have no way to improve their outcomes, a situation I call the Turing Trap.”
Over the past decade, powerful AI systems have matched or surpassed human levels of performance in a number of tasks, such as image and speech recognition, applications like skin cancer classification and breast cancer detection, and complex games like Jeopardy and Go. These AI breakthroughs are generally referred to as soft, narrow, or specialized AI: inspired by, but not aiming to mimic, the human brain. They’ve generally been based on machine learning, that is, on the analysis of vast amounts of data by powerful computers and sophisticated algorithms, yielding results that exhibit qualities we associate with human intelligence.
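To make that machine-learning recipe concrete, here is a minimal, purely illustrative sketch of supervised learning on a toy image-recognition task. It is not any of the systems mentioned above, just the same basic pattern at miniature scale: fit a model to labeled examples, then check how well it generalizes to examples it has never seen.

```python
# A toy version of the supervised-learning recipe behind "narrow" AI:
# learn from labeled examples, then test generalization on held-out data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1,797 labeled 8x8 images of handwritten digits

# Hold out a quarter of the data to measure generalization.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network "learns" by adjusting its weights to fit the data.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Real systems differ from this sketch mainly in scale, with millions or billions of examples and parameters rather than a few thousand, which is part of why they need far more data than a human child does, a point the article returns to below.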
What do we mean by human intelligence, the kind of intelligence HLAI aims to imitate? In 1994 the Wall Street Journal published a definition which reflected the consensus of 52 leading academic researchers in fields associated with intelligence: “Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings - ‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do.” This is a good definition of the intelligence that’s long been measured in IQ tests, and that, for the foreseeable future, only humans have.
Unlike HLAI, augmented AI aims to complement workers rather than replace them. Most jobs involve a number of tasks or processes. Some of these tasks are more routine in nature, while others require judgment, social skills, and other human capabilities. The more routine the task, the more amenable it is to automation. But the fact that some tasks have been automated does not imply that the whole job has disappeared. On the contrary, automating the more routine parts of a job often leads to increased productivity and higher demand for workers, because it complements their skills with tools and machines and enables them to focus on those aspects of the job that most need their attention.
“When AI augments human capabilities, enabling people to do things they never could before, then humans and machines are complements,” said Brynjolfsson. “Complementarity implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making. In contrast, when AI replicates and automates existing human capabilities, machines become better substitutes for human labor and workers lose economic and political bargaining power. Entrepreneurs and executives who have access to machines with capabilities that replicate those of humans for a given task can and often will replace humans in those tasks.”
The article notes that the risks of an excessive focus on HLAI are amplified because three groups of people find it alluring: technologists, businesspeople, and policymakers. Let me summarize the allure of HLAI to each of these groups.
Technologists. “Technologists have sought to replicate human intelligence for decades to address the recurring challenge of what computers could not do.” AI mastered checkers in the 1950s, chess in 1997, Jeopardy in 2011, and Go in 2016. But, while acknowledging the appeal of developing AI systems that replicate human tasks, like driving cars, climbing stairs, or writing poems, “the paradoxical reality is that HLAI can be more difficult and less valuable than systems that achieve superhuman performance.”
“The future of artificial intelligence depends on designing computers that can think and explore as resourcefully as babies do,” wrote UC Berkeley psychologist Alison Gopnik in a 2019 WSJ essay, The Ultimate Learning Machines. AI algorithms “need enormous amounts of data, only some kinds of data will do, and they’re not very good at generalizing from that data. Babies seem to learn much more general and powerful kinds of knowledge than AIs do, from much less and much messier data. In fact, human babies are the best learners in the universe. … [they] can learn new categories from just a small number of examples. A few storybook pictures can teach them not only about cats and dogs but jaguars and rhinos and unicorns.”
In the same paper where he introduced the Turing test, Turing wrote that the key to human intelligence is our ability to learn, and suggested that the design of a learning machine should imitate a child’s mind, not an adult’s. Such a learning machine would have to be based on processes similar to those of evolution by natural selection, encompassing what a child inherits in its initial state at birth, the education it receives over the years, and any other experiences that shape its mind into that of an adult. Since our brains have been shaped by the 300,000 or so years of Homo sapiens evolution, and by the evolution of our earlier ancestors’ brains over many millions of years, it’s hard to know if we’ll ever be able to build such a learning machine.
Businesspeople. Businesspeople “often find that substituting machinery for human labor is the low-hanging fruit of innovation, … swap in a piece of machinery for each task a human is currently doing,” said Brynjolfsson. “That mindset reduces the need for more radical changes to business processes.”
“Similarly, because labor costs are the biggest line item in almost every company’s budget, automating jobs is a popular strategy for managers.” But, while cutting costs is often easier than expanding markets, most economic value comes from innovative new goods and services. More than 60% of today’s jobs hadn’t even been invented in 1940. “In short, automating labor ultimately unlocks less value than augmenting it to create something new.”
Policymakers. “The first rule of tax policy is simple: you tend to get less of whatever you tax. Thus, a tax code that treats income that uses labor less favorably than income derived from capital will favor automation over augmentation.”
US taxes on labor income are now significantly higher than taxes on capital income. While the top tax rates on both kinds of income were the same in 1986, successive changes have created a large disparity. In 2021, the top marginal rate on labor income was 37%, “while long-term capital gains have a variety of favorable rules, including a lower statutory tax rate of 20 percent, the deferral of taxes until capital gains are realized, and the ‘step-up basis’ rule that resets capital gains to zero, wiping out the associated taxes, when assets are inherited.”
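To see how much those two headline rates alone tilt the playing field, here is a small, purely illustrative calculation; it deliberately ignores deferral, step-up, payroll taxes, and the many other rules that shape real liabilities.

```python
# Stylized comparison of the 2021 top US rates cited above.
# Illustrative only: real tax liability depends on many rules not modeled here.
LABOR_RATE = 0.37    # top marginal rate on labor income
CAPITAL_RATE = 0.20  # top statutory rate on long-term capital gains

pretax = 1_000_000  # the same pre-tax dollar amount, earned two ways

labor_after_tax = pretax * (1 - LABOR_RATE)      # $630,000
capital_after_tax = pretax * (1 - CAPITAL_RATE)  # $800,000

print(f"after-tax labor income:   ${labor_after_tax:,.0f}")
print(f"after-tax capital income: ${capital_after_tax:,.0f}")
# Identical pre-tax returns are worth ~27% more when they arrive as
# capital income, nudging firms toward capital (automation) over labor.
print(f"capital advantage: {capital_after_tax / labor_after_tax - 1:.1%}")
```

Under Brynjolfsson’s argument, that wedge is exactly the kind of excess incentive for automation over augmentation that policy could eliminate or reverse.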
“More and more Americans, and indeed workers around the world, believe that while the technology may be creating a new billionaire class, it is not working for them,” wrote Brynjolfsson in conclusion. “The more technology is used to replace rather than augment labor, the worse the disparity may become, and the greater the resentments that feed destructive political instincts and actions. More fundamentally, the moral imperative of treating people as ends, and not merely as means, calls for everyone to share in the gains of automation. The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. … By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.”