Last week I discussed a recent Pew Research report on the impact of AI, robotics and other advanced technologies on the future of jobs. The report was based on the responses of nearly 1,900 experts to a few open-ended questions, including “Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?” The experts’ responses to this question were divided down the middle.
Beyond predictions based on responses to a survey, can one develop an overall framework to help analyze these critical issues? Along this line, I like a recent paper by MIT economist David Autor, Polanyi’s Paradox and the Shape of Employment Growth. The paper was presented at the annual Jackson Hole Federal Reserve symposium, a gathering of some of the world’s most prominent central bankers, finance experts and academics, where the theme this year was “Re-evaluating Labor Market Dynamics”. The paper carefully laid out its arguments based on existing empirical evidence. Let me summarize its key points.
Computers have made huge advances in automating many physical and cognitive human tasks, especially those tasks that can be well described by a set of rules. But, Professor Autor argues, despite continuing advances in AI and robotics, the “challenges to substituting machines for workers in tasks requiring flexibility, judgment, and common sense remain immense.”
Central to his argument is the concept of tacit knowledge, first introduced in the 1950s by scientist and philosopher Michael Polanyi. Explicit knowledge is formal, codified, and can be readily explained to people and captured in a computer program. Tacit knowledge, on the other hand, is the kind of knowledge we are often not aware we have, and is therefore difficult to transfer to another person, let alone to a machine. Generally, this kind of knowledge is best transmitted through personal interactions and practical experiences. Everyday examples include speaking a language, riding a bike, driving a car, and easily recognizing many different objects and animals.
“We can know more than we can tell,” noted Polanyi in what Autor refers to as Polanyi’s paradox. This seeming paradox succinctly captures the fact that we tacitly know a lot about the way the world works, yet are not able to explicitly describe this knowledge.
The paper builds on Autor’s earlier research on the polarization of job opportunities in the US, where he examined the changing dynamics of the US labor market by looking at three different segments:
- high-skill, high-wage jobs, where opportunities have significantly expanded, with the earnings of the college-educated workers needed to fill such jobs rising steadily over the past thirty years;
- low-skill, low-wage jobs, which have also been expanding, while their wage growth, particularly since 2000, has been flat to negative;
- mid-skill, mid-wage jobs, which have been declining, while their wage growth has also declined over the years, especially since 2000.
Many mid-skill activities involve relatively routine tasks, that is, tasks or processes that can be well described by a set of rules. They include blue-collar manual activities such as manufacturing and other forms of production, as well as white-collar, information-based activities like accounting, record keeping, dealing with simple customer service questions, and many kinds of administrative tasks. “Because the core tasks of these occupations follow precise, well understood procedures, they are increasingly codified in computer software and performed by machines,” writes Autor. “This force has led to a substantial decline in employment in clerical, administrative support and, to a lesser degree, production and operative employment.”
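To make that codification point concrete, here is a minimal sketch in Python, using an invented record-keeping rule set rather than anything from Autor’s paper, of how a routine clerical task can be reduced to explicit rules a machine can follow:

```python
# A hypothetical sketch of a routine record-keeping task expressed as explicit
# rules -- the kind of well-understood procedure that can be codified in software.

def categorize_transaction(description: str, amount: float) -> str:
    """Assign an expense category using fixed, pre-specified rules."""
    description = description.lower()
    if "payroll" in description:
        return "salaries"
    if "rent" in description or "lease" in description:
        return "facilities"
    if amount < 100 and "office" in description:
        return "supplies"
    return "uncategorized"  # anything outside the rules still needs human judgment

print(categorize_transaction("Monthly office rent", 2500.00))    # facilities
print(categorize_transaction("Office paper and toner", 45.99))   # supplies
```

Every step here is an explicit, pre-specified rule; nothing in the procedure depends on tacit judgment, which is precisely what makes such tasks easy candidates for automation.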
Low- and high-skill activities are generally non-routine in nature. Low-skill activities tend to be manual tasks that cannot be described by a set of rules that a machine can follow. Jobs in this category include janitorial services, gardening, fast-food restaurant positions and health care aides. These activities are neither candidates for technology substitution, nor can they be easily complemented with technology-based tools.
Most high-skill jobs involve expert problem solving, complex communications and other cognitive human activities for which there are no rule-based solutions. Examples include sophisticated medical diagnosis, complex designs, and many R&D tasks, as well as managing large organizations, teaching, and writing books and papers. Computers have significantly complemented and increased the productivity of these high-skill, information-intensive jobs, and have enabled them to address many new kinds of problems.
One would thus expect non-routine jobs, both high-skill cognitive and low-skill manual ones, to be growing, since they are much less amenable to technology substitution, while the more routine mid-skill white- and blue-collar jobs shrink, since they are prime candidates for automation. Autor’s Federal Reserve paper presents considerable quantitative evidence that this is indeed the case, not only in the US but also in 16 European Union economies.
“At a practical level, Polanyi’s paradox means that many familiar tasks, ranging from the quotidian to the sublime, cannot currently be computerized because we don’t know the rules,” adds Autor. “At an economic level, Polanyi’s paradox means something more. The fact that a task cannot be computerized does not imply that computerization has no effect on that task. On the contrary: tasks that cannot be substituted by computerization are generally complemented by it. This point is as fundamental as it is overlooked.”
What about the future, as our machines are now being increasingly applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans? Are all workers now in danger of losing the Race Against the Machine? What can Polanyi’s paradox teach us about efforts to computerize tasks requiring flexibility, judgment and common sense? Autor discusses two major approaches that might help us computerize such tasks: environmental control and machine learning.
Environmental control essentially involves engineering the environment to make up for the many limitations of machines, while taking advantage of their many benefits. While machines find it very hard to operate in unpredictable environments, we’ve long been adapting and simplifying work environments so we can benefit from what machines are good at. Assembly lines are well-known examples of adapting the factory environment in which machines operate. So are the rail tracks and smooth paved roads that enable us to use trains, cars and trucks. In highly controlled environments, such as the trains that move between airport terminals, operation can even be fully automated, with no human operator required. More recently, warehouses are being re-engineered so that human pickers and intelligent but limited robotic machines can better work together.
What about the promise of self-driving cars and trucks, which many believe will be all around us within a decade, though others are not so sure how fully automated they will actually be? Google’s autonomous car, for example, requires highly detailed, curated maps for its operation, through which it then navigates using real-time data from its sensors. If its software determines that the real environment it’s encountering is sufficiently different from its pre-specified maps, it hands over control to the human operator. “Thus,” notes Autor, “while the Google car appears outwardly to be as adaptive and flexible as a human driver, it is in reality more akin to a train running on invisible tracks.”
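Autor’s description suggests a simple control pattern: drive autonomously only while the live sensor picture matches the curated map, and hand control back to the human otherwise. Here is a minimal sketch of that pattern, with the threshold, data structures and feature comparison all invented for illustration rather than drawn from Google’s actual system:

```python
# Hypothetical sketch of the "hand over control" pattern described above:
# operate autonomously only while the sensed scene matches the curated map.

DIVERGENCE_THRESHOLD = 0.2  # assumed tolerance; not an actual Google parameter

def divergence(sensor_scene: dict, map_scene: dict) -> float:
    """Fraction of the map's expected features missing from the live sensor view."""
    expected = set(map_scene["features"])
    observed = set(sensor_scene["features"])
    if not expected:
        return 0.0
    return len(expected - observed) / len(expected)

def control_mode(sensor_scene: dict, map_scene: dict) -> str:
    if divergence(sensor_scene, map_scene) > DIVERGENCE_THRESHOLD:
        return "handover_to_human"   # reality differs too much from the map
    return "autonomous"              # the "train running on invisible tracks" case

# A crosswalk on the map is missing from the sensor view, so control is handed over.
print(control_mode({"features": ["lane", "stop_sign"]},
                   {"features": ["lane", "stop_sign", "crosswalk"]}))
```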
Environmental control holds great promise for the future, as we co-design our increasingly smart machines along with the environment in which they will operate. And, such machine-friendly environments need not resemble the more unpredictable environments that are natural for humans due to all the tacit knowledge we’ve acquired through experience.
Machine learning is an attempt to leverage all that practical experience to make an end‐run around Polanyi’s paradox. It involves the application of inductive reasoning so the machine can learn from statistical patterns in the data rather than from following explicitly programmed instructions. “Thus, through a process of exposure, training, and reinforcement, machine learning algorithms may potentially infer how to accomplish tasks that have proved dauntingly challenging to codify with explicit procedures.”
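As a rough illustration of that inductive approach, the following sketch uses scikit-learn’s bundled handwritten-digit data, my choice of example rather than one from the paper: instead of writing down rules for what each digit looks like, we fit a model to labeled examples and let it infer the pattern.

```python
# Minimal illustration of learning from statistical patterns in data rather
# than from explicitly programmed rules, using scikit-learn's sample digits.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 pixel images of handwritten digits, labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Nobody codifies what a "3" looks like; the model infers it from examples,
# Autor's "process of exposure, training, and reinforcement".
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```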
Machine learning has been successfully applied to many tasks over the past few decades. The explosive growth of big data and the advent of data science as an exciting new discipline hold great promise for the future of machine learning and related data-driven methodologies. But, while achieving great success in many sophisticated data-intensive tasks (e.g., in healthcare, marketing and finance), machine learning can face serious limitations in simple everyday tasks that a young child can quickly master, such as visually recognizing a chair or a cat, something we sort of learn to do without quite knowing how.
Moreover, there might be practical engineering limitations to such data-intensive applications. As a 2012 NY Times article noted, it took Google researchers 16,000 processors to teach a machine to identify a cat using machine learning principles. And the IBM Watson computer, which won the Jeopardy! Challenge in 2011, consumed 85,000 watts of power to defeat the two best human Jeopardy! players, each of whose brains consumed roughly 20 watts. While advances in technology will significantly improve the efficiency of such data-intensive applications, their commercial success might be limited by their high energy costs. When it comes to tasks requiring extensive use of tacit knowledge, we still have much to learn from biology.
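As a back-of-the-envelope check on that energy gap, the figures cited above work out roughly as follows:

```python
# Rough comparison of the power figures cited above: Watson vs. two human brains.
watson_watts = 85_000              # IBM Watson during the 2011 Jeopardy! match
human_brain_watts = 20             # approximate power draw of one human brain
ratio = watson_watts / (2 * human_brain_watts)
print(f"Watson drew roughly {ratio:,.0f} times the power of its two opponents' brains")
# prints: Watson drew roughly 2,125 times the power of its two opponents' brains
```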
“Still, the long-term potential of machine learning for circumventing Polanyi’s paradox is a subject of active debate among computer scientists,” writes Autor. “Some researchers expect that as computing power rises and training databases grow, the brute force machine learning approach will approach or exceed human capabilities. Others suspect that machine learning will only ever ‘get it right’ on average while missing many of the most important and informative exceptions.”
Professor Autor concludes the paper with a few key personal observations.
“As physical labor has given way to cognitive labor, the labor market’s demand for formal analytical skills, written communications, and specific technical knowledge has risen spectacularly. . . Thus, human capital investment must be at the heart of any long-term strategy for producing skills that are complemented rather than substituted by technology.”
“While many middle-skill tasks are susceptible to automation, many middle-skill jobs demand a mixture of tasks from across the skill spectrum. . . many of the middle-skill jobs that persist in the future will combine routine technical tasks with the set of non-routine tasks in which workers hold comparative advantage - interpersonal interaction, flexibility, adaptability and problem-solving.”
And finally, “the challenges to computerizing numerous everyday tasks - from the sublime to the mundane - remain substantial. . . there is a long history of leading thinkers overestimating the potential of new technologies to substitute for human labor and underestimating their potential to complement it.”