“A recent Gallup poll found that 75% of U.S. adults believe AI will lead to fewer jobs,” wrote MIT economist David Autor in a recent article, “AI Could Actually Help Rebuild The Middle Class.” “But this fear is misplaced. The industrialized world is awash in jobs, and it’s going to stay that way. Four years after the Covid pandemic’s onset, the U.S. unemployment rate has fallen back to its pre-Covid nadir while total employment has risen to nearly three million above its pre-Covid peak.”
“Due to plummeting birth rates and a cratering labor force, a comparable labor shortage is unfolding across the industrialized world (including in China),” he added. “This is not a prediction, it’s a demographic fact. All the people who will turn 30 in the year 2053 have already been born and we cannot make more of them. Barring a massive change in immigration policy, the U.S. and other rich countries will run out of workers before we run out of jobs.”
In his article, Autor argues that AI has the potential to transform the labor market by reshaping the nature of human expertise. AI offers us the opportunity to extend the value of human expertise by enabling “a larger set of workers equipped with the necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors.”
Expertise is the domain-specific knowledge or competency required to accomplish a particular goal. Expertise commands high wages if the goal it enables is both necessary and relatively scarce, e.g., physicians, engineers, and lawyers. In contrast, jobs that require little expertise and training generally command low wages, e.g., waiters, janitors, and school crossing guards.
The article examined the historical evolution of expertise across three different eras: the industrial era of the 19th and early 20th centuries, the computer and digital era of the past several decades, and the emerging AI era.
The industrial era
Prior to the industrial revolution of the late 18th century, expertise was artisanal in nature, that is, goods were handmade by skilled craft workers. “Although artisanal expertise was revered, its value was ultimately decimated by the rise of mass production in the 18th and 19th centuries,” wrote Autor. “Mass production meant breaking the complex work of artisans into discrete, self-contained and often quite simple steps that could be carried out mechanistically by a team of production workers, aided by machinery and overseen by managers with higher education levels.”
“As the tools, processes and products of modern industry gained sophistication, demand for a new form of worker expertise — mass expertise — burgeoned. Workers operating and maintaining complex equipment required training and experience in machining, fitting, welding, processing chemicals, handling textiles, dyeing and calibrating precision instruments, etc. Away from the factory floor, telephone operators, typists, bookkeepers and inventory clerks served as information conduits — the information technology of their era.”
The training for this new kind of industrial and information expertise required both literacy, that is, the ability to read and write, and basic numeracy, the ability to understand and apply simple numerical concepts. Hence, the early 20th century saw a major expansion of high school education. By 1940, 50% of young adults in the US had earned a high school diploma.
Mass expertise was narrowly defined in the industrial era. Expert judgment wasn’t needed or even wanted from workers on assembly lines or in offices. “As a result, the narrow procedural content of mass expert work, with its requirement that workers follow rules but exercise little discretion, was perhaps uniquely vulnerable to technological displacement in the era that followed.”
The computer era
The second half of the 20th century saw the emergence of the digital computer era, aka the information age. A wide variety of scientific and business applications could now be precisely represented as software programs. Over the following decades, the exponential advances in the performance and price-performance of digital technologies, i.e., Moore’s Law, significantly increased the use of computers across our increasingly digital economy. Computers automated a large share of the routine, mass-expertise tasks of the industrial era, replacing many mid-skill production and clerical workers or forcing them into lower-skill, lower-pay jobs.
At the same time, sophisticated computer tools enhanced the productivity and value of jobs that required the kind of expert problem solving and complex communications skills typically seen in managerial, professional and technical occupations. While beyond the scope of computer automation, most such jobs have been complemented by advanced computer tools. Since the 1980s, high-skill jobs requiring elite expertise have significantly expanded, with the earnings of the highly educated workers needed to fill such jobs rising steadily.
The AI era
“Like the Industrial and Computer revolutions before it, Artificial Intelligence marks an inflection point in the economic value of human expertise,” wrote Autor. “To appreciate why, consider what distinguishes AI from the computing era that we’re now leaving behind. Pre-AI, computing’s core capability was its faultless and nearly costless execution of routine, procedural tasks. Its Achilles’ heel was its inability to master non-routine tasks requiring tacit knowledge. Artificial Intelligence’s capabilities are precisely the inverse.”
Artificial Intelligence was born as an academic discipline in the mid-1950s. The field’s founders aspired to develop machines capable of using language, forming abstractions and concepts, and solving the kinds of problems that required human intelligence. They believed that just about every aspect of human intelligence could in principle be precisely expressed as software and executed in increasingly powerful computers, as was the case with scientific and business applications in those early days of the computer era.
Many of the leading AI researchers of the ’60s and ’70s were convinced that AI systems capable of human-like cognitive capabilities could be developed within a generation, and they obtained considerable government funding to implement their vision. But it eventually became clear that these projects had grossly underestimated the difficulty of developing machines exhibiting human-like intelligence, because, in the end, poorly understood cognitive capabilities like language, thinking, and reasoning cannot simply be expressed as software. After years of unfulfilled promises and hype, these ambitious AI approaches were abandoned in the 1980s, and a so-called AI winter of reduced interest and funding set in that nearly killed the field.
AI was reborn in the 1990s. Instead of trying to program human-like intelligence, the field embraced a statistical, brute-force approach based on searching for patterns in vast amounts of data with highly parallel supercomputers and sophisticated algorithms, an approach widely used in scientific applications like high energy physics, theoretical astronomy, and computational genomics. AI researchers discovered that such an information-based approach produced something akin to intelligence or knowledge. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely: the more information you have, the more powerful the supercomputers, and the more sophisticated the algorithms, the better the results.
AI is fundamentally a data-centric discipline, a kind of software 2.0. The centrality of data is the common element in the key technologies that have advanced AI over the past few decades, including big data and analytics in the 2000s, machine and deep learning in the 2010s, and more recently foundation models, LLMs, and generative AI. “In a case of cosmic irony, AI is not trustworthy with facts and numbers — it does not respect rules,” noted Autor. “AI is, however, remarkably effective at acquiring tacit knowledge. Rather than relying on hard-coded procedures, AI learns by example, gains mastery without explicit instruction and acquires capabilities that it was not explicitly engineered to possess.”
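To make the contrast concrete, here is a minimal, hypothetical sketch in Python, not drawn from Autor’s article: a hand-written rule of the kind that powered computer-era automation, next to a model that infers its own decision procedure from labeled examples using scikit-learn’s decision tree classifier. The spam-flagging task, the features, and the thresholds are invented purely for illustration.

```python
# Hypothetical illustration: explicit rules vs. learning by example.
# The spam-flagging task, features and thresholds are invented for this sketch.

from sklearn.tree import DecisionTreeClassifier

# Computer-era approach: the expert's procedure is hard-coded as explicit rules.
def rule_based_flag(num_links: int, num_exclamations: int) -> bool:
    """Flag a message as spam using hand-written thresholds."""
    return num_links > 3 and num_exclamations > 2

# AI-era approach: no rules are written down; the model infers a decision
# procedure from labeled examples (features: [num_links, num_exclamations]).
examples = [[0, 0], [1, 1], [5, 4], [6, 3], [2, 0], [7, 5]]
labels   = [0,      0,      1,      1,      0,      1]   # 1 = spam

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(examples, labels)

print(rule_based_flag(6, 3))        # True -- follows the explicit rule
print(model.predict([[6, 3]])[0])   # 1    -- inferred from the training examples
```

The point of the sketch is only the shift in where the expertise lives: in the first function a person wrote the rule, while in the second the procedure is extracted from data, which is the sense in which statistical AI "learns by example" rather than following hard-coded instructions.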
“AI’s capacity to depart from script, to improvise based on training and experience, enables it to engage in expert judgment — a capability that, until now, has fallen within the province of elite experts. Though only in its infancy, this is a superpower. As AI’s facility in expert judgment becomes more reliable, incisive and accessible in the years ahead, it will emerge as a near-ubiquitous presence in our working lives. Its primary role will be to advise, coach and alert decision-makers as they apply expert judgment.”
The promise of AI
In his 2010 article, The Polarization of Job Opportunities in the US Labor Market, Professor Autor said that the US labor market was facing two critical challenges. Since the 1980s, US education levels have not kept up with the rising demand for the judgment and decision-making capabilities of highly skilled workers, resulting in a sharp rise in the earnings and employment opportunities of highly educated professionals, e.g., doctors, lawyers, university professors, and computer scientists. At the same time, we’ve seen contracting opportunities and earnings for mid-wage, mid-skill white- and blue-collar jobs, whose procedural expertise can be described by a set of rules and which have thus been prime candidates for technology substitution as well as for offshoring to lower-cost countries.
AI technologies have the potential to address these serious challenges by augmenting the procedural knowledge of less expert workers, thus enabling them to perform tasks requiring expert decision-making capabilities. “By providing decision support in the form of real-time guidance and guardrails, AI could enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks currently arrogated to elite experts like doctors, lawyers, coders and educators. This would improve the quality of jobs for workers without college degrees, moderate earnings inequality, and — akin to what the Industrial Revolution did for consumer goods — lower the cost of key services such as healthcare, education and legal expertise.”
Autor illustrates this point with the concrete example of the job of Nurse Practitioner (NP). In the US, NPs must have an advanced nursing degree that certifies them to perform a variety of health services that were previously the domain of physicians, such as ordering and interpreting diagnostic tests, formulating treatment plans, and prescribing medications. “Moving forward, AI could ultimately supplement the expert judgment of NPs engaging in a broader scope of medical care tasks.”
“For the lucky among us, work provides purpose, community and veneration,” wrote Autor in conclusion. “But the quality, dignity and respect of a substantial minority of jobs has eroded over the past four decades as computerization has marched onward and inequality has grown more prevalent.”
“The unique opportunity that AI offers humanity is to turn back this tide — to extend the relevance, reach and value of human expertise for a larger set of workers. Not only could this dampen earnings inequality and lower the costs of key services like healthcare and education, but it could also help restore the quality, stature and agency that has been lost to too many workers and jobs.”
“This alternative path is not an inevitable or intrinsic consequence of AI development. It is, however, technologically plausible, economically coherent and morally compelling. Recognizing this potential, we should ask not what AI will do to us, but what we want it to do for us.”