I recently read Where is Technology Taking the Economy?, an article in McKinsey Quarterly by W. Brian Arthur, External Professor at the Santa Fe Institute, Visiting Researcher at PARC, and former faculty member at Stanford University.
According to Arthur, the digital revolution has morphed through three distinct eras over the past several decades. The first era, in the 1970s and 1980s, brought us Moore’s law and the dramatic advances in semiconductor technologies. From mainframes and supercomputers to PCs and workstations, IT was now being used in a wide variety of applications, from financial services and oil exploration to computer-aided design and office systems. “The economy for the first time had serious computational assistance.”
Then came the second era in the 1990s and 2000s, which enabled us to link computers together, share information, and connect digital processes. “Everything suddenly was in conversation with everything else,” giving rise to “the virtual economy of interconnected machines, software, and processes…, where physical actions now could be executed digitally.”
We’re now in the third era, which began roughly in the 2010s. It’s brought us smartphones, ubiquitous sensors, IoT devices, and oceans and oceans of data. Powerful computers and intelligent algorithms are enabling us to make sense of all that data, searching for patterns and acting on the results, and powering applications like computer vision, natural-language processing, language translation, face recognition, and digital assistants.
A similar conclusion was reached by author and publisher Kevin Kelly in an October 2014 Wired article. He wrote that the AI he foresees is more like a kind of “cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off… It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ… Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization.”
To illustrate the importance of such an intelligent virtual economy, Arthur compares it to the printing revolution of the 15th and 16th centuries. Prior to the invention of the printing press by Johannes Gutenberg around 1440, information was housed in hand-written manuscripts, with very limited dissemination because of the time-consuming effort involved in copying them. Suddenly, the advent of the printing press made it possible to externalize all that information. It could now be accessed, shared and expanded upon by any reader, giving rise to an explosion of knowledge, greatly accelerating the Renaissance and Reformation and ushering in the scientific revolution.
“Now we have a second shift from internal to external, that of intelligence, and because intelligence is not just information but something more powerful - the use of information - there’s no reason to think this shift will be less powerful than the first one,” writes Arthur. “We don’t yet know its consequences, but there is no upper limit to intelligence and thus to the new structures it will bring in the future.”
Companies can take advantage of these intelligent capabilities, like face recognition, natural language processing and chatbots, to automate existing processes. They can create new business models by stitching together different pieces of external intelligence. And, as has been the case with IT and the Internet, external intelligence will have a major impact across the whole economy. “The components of external intelligence can’t easily be owned, they tend to slide into the public domain. And data can’t easily be owned either, it can be garnered from nonproprietary sources… if past technology revolutions are indicative, we will see entirely new industries spring up we hadn’t even thought of.”
But, as we well know, there are serious downsides to technology-based automation. Fears that machines will put humans out of work are not new. Throughout the Industrial Revolution there were periodic panics about the impact of automation on jobs, going back to the so-called Luddites, textile workers who in the 1810s smashed the new machines that were threatening their jobs.
In a 1930 essay, English economist John Maynard Keynes wrote about the onset of “a new disease” which he named technological unemployment, that is, “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” Keynes predicted that, assuming no catastrophic events, the standard of living in advanced economies would be so much higher by 2030 that “for the first time since his creation man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares, how to occupy the leisure,” and most people would be working a 15-hour week or so, which would satisfy their need to work in order to feel useful and contented.
Automation fears have understandably accelerated in recent years. We’re not quite at 2030, and the 15-hour week hasn’t come to pass. But Arthur believes that “we have reached the Keynes point” in the US and other advanced nations, “where indeed enough is produced by the economy, both physical and virtual, for all of us.” This is ushering in a new economic era that’s not so much about production as about distribution: how people get a reasonable share of what’s being produced. Technological unemployment is becoming a reality. “The economic challenge of the future will not be producing enough. It will be providing enough good jobs,” wrote Harvard professor and former Treasury Secretary Larry Summers in a July 2014 WSJ article.
We’re at the very start of this new distributive economy era, notes Arthur. “Everything from trade policies to government projects to commercial regulations will in the future be evaluated by distribution. Politics will change, free-market beliefs will change, social structures will change.” This emerging distributive era will bring new economic and social realities, including:
The criteria for assessing policies will change. “The old production-based economy prized anything that helped economic growth. In the distributive economy, where jobs or access to goods are the overwhelming criteria, economic growth looks desirable as long as it creates jobs.”
The criteria for measuring the economy will also change. “GDP and productivity apply best to the physical economy and do not count virtual advances properly… GDP is the total of goods and services times their price. And very many virtual services, like email, generate unmeasured benefits for the user, cost next to nothing, and are unpriced. So when we replace priced physical services with free virtual ones, GDP falls. Productivity (GDP per worker) falls too.”
Free-market philosophy will be more difficult to support in the new atmosphere. Unregulated market behavior leads to efficiencies, but it also often leads to concentration. There will be winners and losers. “In the distributive era free-market efficiency will no longer be justifiable if it creates whole classes of people who lose.”
The new era will not be an economic one but a political one. “Workers who have steadily lost access to the economy as digital processes replace them have a sense of things falling apart, and a quiet anger about immigration, inequality, and arrogant elites… Production, the pursuit of more goods, is an economic and engineering problem; distribution, ensuring that people have access to what’s produced, is a political problem. So until we’ve resolved access we’re in for a lengthy period of experimentation, with revamped political ideas and populist parties promising better access to the economy.”
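Arthur’s point about GDP mismeasurement is, at bottom, simple arithmetic. A toy sketch with made-up numbers (the figures are illustrative, not from Arthur’s article) shows how swapping a priced physical service for a free virtual one drags down both measured GDP and measured productivity, even though users get the same benefit:

```python
# Toy illustration with hypothetical numbers: what happens to measured
# GDP and productivity when a priced service (postal mail) is replaced
# by an unpriced virtual one (email).

workers = 100

# Before: mail is a priced service, so it counts toward GDP.
other_goods = 900_000      # dollar value of all other priced output
postal_service = 100_000   # priced mail service
gdp_before = other_goods + postal_service
productivity_before = gdp_before / workers

# After: email delivers the same benefit to users, but it is free and
# therefore unpriced - it contributes nothing to measured GDP.
email_service = 0
gdp_after = other_goods + email_service
productivity_after = gdp_after / workers

print(gdp_before, gdp_after)                    # 1000000 900000
print(productivity_before, productivity_after)  # 10000.0 9000.0
```

The benefit to users is unchanged, yet both statistics fall, which is exactly why Arthur argues these production-era measures undercount the virtual economy.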
“Whether we manage a reasonable path forward in this new distributive era depends on how access to the economy’s output will be provided…” writes Arthur in conclusion. “We will also need to settle a number of social questions: How will we find meaning in a society where jobs, a huge source of meaning, are scarce? How will we deal with privacy in a society where authorities and corporations can mine into our lives and finances, recognize our faces wherever we go, or track our political beliefs? And do we really want external intelligence helping us at every turn: learning how we think, adjusting to our actions, chauffeuring our cars, correcting us, and maybe even nurturing us?…”
“All these challenges will require adjustments. But we can take consolation that we have been in such a place before… The needed adjustments will be large and will take decades. But we will make them, we always do.”