The June 25 issue of The Economist includes a special report on artificial intelligence. AI has been making extraordinary progress in the past few years. It’s ironic that after years of frustration with AI’s missed promises, many now worry that its considerable power has arrived before we know how to properly deploy it. Some fear that at some future time, a sentient, superintelligent general AI might pose an existential threat to humanity. While dismissing such dire concerns, many experts worry that the real threat is that AI advances could lead to widespread economic dislocation.
People have long worried about the impact of technology on society, whether railroads, electricity, and cars in the Industrial Age, or the Internet, mobile devices and smart connected products now permeating just about every aspect of our lives. The Economist reminds us that these worries have been with us ever since the advent of industrialization two centuries ago. Eminent English economist David Ricardo first raised the machinery question in 1821, that is, the “opinion entertained by the labouring class, that the employment of machinery is frequently detrimental to their interests”.
Automation anxieties continued to resurface in the 20th century, right along with accelerating technology advances. In a 1930 essay, English economist John Maynard Keynes wrote about the onset of “a new disease” which he named technological unemployment, that is, “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” But each time those fears arose in the past, technology innovations ended up creating more jobs than they destroyed, causing the majority of economists to confidently wave away the machinery question.
Automation fears have understandably accelerated in recent years, as our increasingly smart machines are now being applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. The concerns surrounding AI’s long term impact may well be in a class by themselves. Like no other technology, AI forces us to explore the very boundaries between machines and humans.
What impact will AI have on jobs? Could our smart machines lead to mass unemployment? What will life be like in such an AI future? “After 200 years, the machinery question is back. It needs to be answered,” notes The Economist. What can we learn from history that will help us better respond to AI’s technological advances?
MIT economist David Autor explored the lessons from history in a 2015 paper, Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Dramatic declines have occurred in a number of occupations over the past 100 years. The percentage of US workers employed in agriculture has declined from 41% in 1900 to 2% in 2000. Cars drastically reduced the demand for blacksmiths and stable hands, machines have replaced many manual jobs in construction and factories, and computers have been steadily displacing large numbers of record keeping and office positions.
Given the continuing automation of so much human work over the past couple of centuries, why are there still so many jobs left? The answer isn’t very complicated, although frequently overlooked. As Autor succinctly puts it: “tasks that cannot be substituted by automation are generally complemented by it.” Automation does indeed substitute for labor. However, automation also complements labor, raising economic outputs in ways that often lead to higher demand for workers.
Most jobs involve a number of tasks or processes. Some of these tasks are more routine in nature, while others require judgement, social skills and other human capabilities. The more routine and rules-based the task, the more amenable it is to automation. But the fact that some of a job’s tasks have been automated does not imply that the whole job has disappeared. To the contrary, automating the more routine parts of a job will often increase the productivity and quality of workers, by complementing their skills with machines and computers, as well as enabling them to focus on those aspects of the job that most need their attention.
The Economist references the work of economist James Bessen, who in a recent Atlantic article, - The Automation Paradox, - argued that “what’s happening with automation is not so simple or obvious. It turns out that workers will have greater employment opportunities if their occupation undergoes some degree of computer automation. As long as they can learn to use the new tools, automation will be their friend.”
This was the case with the weaving machines that the Luddites famously opposed in the early days of the Industrial Revolution. The automation of tasks in the weaving processes prompted workers to focus on the things the machines could not do, causing output to grow explosively. “In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labour required per yard of cloth fell by 98%. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. In other words, technology gradually changed the nature of the weaver’s job, and the skills required to do it, rather than replacing it altogether…”
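The two weaving figures quoted above are really one fact stated two ways: if a weaver produces 50 times as much cloth per hour, each yard takes 1/50th of the labour it used to, a 98% reduction. A quick sketch confirms the arithmetic:

```python
# Arithmetic check on the 19th-century weaving figures quoted above:
# a fiftyfold rise in cloth produced per weaver-hour implies each yard
# of cloth needs only 1/50th of the original labour.
productivity_gain = 50
labour_per_yard = 1 / productivity_gain   # 0.02 of the original labour
reduction = 1 - labour_per_yard           # fraction of labour saved

print(f"Labour required per yard fell by {reduction:.0%}")  # → 98%
```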
The advent of automated teller machines (ATMs) in the 1970s is another, more recent example. By 2010, there were approximately 400,000 ATMs in the US. Yet not only were bank tellers not eliminated; their numbers actually rose modestly, from 500,000 in 1980 to 550,000 in 2010. Replacing some bank employees with ATMs made it cheaper to open new branches while changing their work mix, away from routine tasks and towards tasks like sales and customer service that machines could not do.
“The same pattern can be seen in industry after industry after the introduction of computers, says Mr Bessen: rather than destroying jobs, automation redefines them, and in ways that reduce costs and boost demand. In a recent analysis of the American workforce between 1982 and 2012, he found that employment grew significantly faster in occupations (for example, graphic design) that made more use of computers, as automation sped up one aspect of a job, enabling workers to do the other parts better. The net effect was that more computer-intensive jobs within an industry displaced less computer-intensive ones. Computers thus reallocate rather than displace jobs, requiring workers to learn new skills… So far, the same seems to be true of fields where AI is being deployed.”
Since the 1980s, US job opportunities have sharply polarized. Mid-skill occupations involving routine manual (blue-collar) and cognitive (white-collar) tasks have been declining because they’re prone to automation and to outsourcing to lower-wage countries. At the same time, we’ve seen the steady growth of jobs involving non-routine, low skill manual tasks, - e.g., food and cleaning services, personal care and health care aides, - and non-routine, high skill cognitive tasks, - e.g., managerial, professional and technical occupations. A recent graph by the Federal Reserve Bank of St Louis starkly illustrates the job polarization that’s taken place over the past 30 years, and in particular, the increasingly dominant role of high skill cognitive occupations.
“As with the introduction of computing into offices, AI will not so much replace workers directly as require them to gain new skills to complement it…” notes The Economist. But, “Even if job losses in the short term are likely to be more than offset by the creation of new jobs in the long term, the experience of the 19th century shows that the transition can be traumatic”. Industrialization led to major increases in productivity, income and living standards over the long run, but it took significantly longer than is often appreciated. “[D]ecades passed before this was fully reflected in higher wages. The rapid shift of growing populations from farms to urban factories contributed to unrest across Europe. Governments took a century to respond with new education and welfare systems.”
“This time the transition is likely to be faster, as technologies diffuse more quickly than they did 200 years ago. Income inequality is already growing, because high-skill workers benefit disproportionately when technology complements their jobs. This poses two challenges for employers and policymakers: how to help existing workers acquire new skills; and how to prepare future generations for a workplace stuffed full of AI.”
No one can really tell if technology will once more end up creating more jobs than it destroys, or if this time will be different and AI will end up replacing many jobs, including high skill ones, while creating few new ones. But regardless, we cannot ignore the machinery question. Even if AI doesn’t lead to mass unemployment, technological advances are already disrupting labor markets and contributing to social unrest.
How should we respond? Companies and governments need to assist workers in acquiring new skills while helping them switch jobs as needed. This includes “making education and training flexible enough to teach new skills quickly and efficiently… a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation.”
It will also require updating our social policies, perhaps along the lines of Denmark’s flexicurity system, which aims to achieve both flexibility in labor markets and security for workers, letting firms “hire and fire easily, while supporting unemployed workers as they retrain and look for new jobs. Benefits, pensions and health care should follow individual workers, rather than being tied (as often today) to employers.”
“John Stuart Mill wrote in the 1840s that ‘there cannot be a more legitimate object of the legislator’s care’ than looking after those whose livelihoods are disrupted by technology. That was true in the era of the steam engine, and it remains true in the era of artificial intelligence.”