"Can technology plan economies and destroy democracy?" is the provocative title of an essay in the end-of-2019 issue of The Economist. "The success of market- and semi-market-based economies… has made the notion of a planned economy seem like a thing of the past." But, given all the data that can now be gathered by billions of smartphones and tens of billions of sensors, with many more to come, could our increasingly intelligent and powerful AI systems "replace the autonomous choices on which the market is based?… And if technology can outperform the invisible hand in the economy, might it be able to do the same at the ballot box when it comes to politics?"
As we’ve already seen, AI applications “can provide not just data on what people want and will tolerate, but also the means to manipulate those desires. When such means are available to actors within or outside a state the struggle to gain, or for that matter retain, a democracy which reflects a genuine popular will might become even harder than it is.” Could this all mean that free-market economics and liberal democracy will become obsolete over the coming century?
These questions were often raised and repeatedly tested throughout the 20th century, as various governments attempted to redesign their economies and societies in accordance with what were believed to be scientific laws. But most such schemes, especially those carried out by authoritarian states, ended up as complete failures. Some went tragically awry, like Stalin's collectivization of agriculture in the Soviet Union and Mao's Great Leap Forward in China, both of which brought death and disruption to millions.
It wasn't just in authoritarian states that large, expensive, and well-intentioned social engineering programs ended up worsening the conditions they aimed to correct. In the mid-20th century, a number of American cities launched large-scale urban renewal projects in an attempt to redevelop older, rundown sections of inner cities and replace them with highways and low-income housing projects. Robert Moses's redevelopment of large sections of New York City was a prominent example, as was the Cabrini housing project in Chicago.
But starting in the 1960s, opposition to urban renewal projects began to grow. As Jane Jacobs argued in The Death and Life of Great American Cities, many of these projects were responsible for the decline of well-functioning city neighborhoods. Over the years, crime and neglect led to deplorable conditions in a number of housing projects, like Chicago's Cabrini homes, which were eventually demolished in the 2000s.
Given the collapse of the Soviet Union in 1991, and China’s decision to embrace markets in much of its economy, “[t]he market side of the debate seemed conclusively proved right,” noted The Economist. “Some saw the Soviet collapse as making an allied point about politics; that decentralised freedom worked better.” These grand, centrally managed schemes to improve the human condition failed because they aimed to replace the highly complex interdependencies of social life and markets with abstract scientific knowledge. “With the tools at their disposal today, planners and controllers would seem to have no hope of competing with organically grown information-processing systems such as markets and democracy.”
But could things be different in the age of AI? Given the proliferation of mobile and IoT devices, the next few decades promise to make information as ubiquitous as electricity. The amount and variety of data gathered around the world will continue to grow by leaps and bounds, as will the power and sophistication of the computers and algorithms used to analyze all that data. "Yet none of this means either efficient or effective planning is possible in the near term, or perhaps ever," concludes The Economist. "Democratic and market processes act to even out human fallibility and explore all sorts of possibilities. Planned dictatorships narrow choices and amplify error."
Last century's rivalry with the Soviet Union and its communist ideology is now being replaced by a rivalry with China and its AI-based central planning. How is such AI-based planning likely to work out? The Economist essay references the work of George Washington University professor Henry Farrell, who explored this question in a recent article, Seeing Like a Finite State Machine.
The collective wisdom that’s emerging in Washington DC and other capitals is that “China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism,” writes Farrell. “Authoritarianism has always been plagued with problems of gathering and collating information and of being sufficiently responsive to its citizens’ needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma… Authoritarianism then, can emerge as a more efficient competitor that can beat democracy at its home game.”
"The theory behind this is one of strength reinforcing strength": the strengths of ubiquitous data gathering and analysis reinforce the strengths of central planning, where a well-organized management hierarchy can now make AI-based decisions. "Yet there is another story to be told - of weakness reinforcing weakness." According to Farrell, central planning based on machine learning must overcome two serious challenges.
First, while machine learning can be applied to just about any domain of knowledge, its methods are most applicable to problems significantly narrower and more specialized than those humans are capable of handling, and there are many tasks for which machine learning is not effective. In particular, as we're frequently reminded, correlation does not imply causation: a model can learn that two quantities move together without learning whether changing one would actually change the other.
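A toy simulation makes that distinction concrete. In the sketch below (purely illustrative; the variables and numbers are invented), two quantities correlate strongly because both are driven by a hidden common cause, yet intervening on one has no effect on the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A hidden common cause (confounder) drives both observed variables.
confounder = rng.normal(size=n)
x = confounder + 0.3 * rng.normal(size=n)   # e.g., ice cream sales
y = confounder + 0.3 * rng.normal(size=n)   # e.g., drowning incidents

# The two variables are strongly correlated in observational data...
print("observed correlation:", np.corrcoef(x, y)[0, 1])   # roughly 0.9

# ...but intervening on x (setting it independently of the confounder)
# leaves y completely unchanged.
x_intervened = rng.normal(size=n)
y_after = confounder + 0.3 * rng.normal(size=n)
print("correlation under intervention:",
      np.corrcoef(x_intervened, y_after)[0, 1])           # roughly 0.0
```

A pattern-matching model trained on the observational data would predict that changing x changes y; a planner acting on that prediction would be acting on a correlation that vanishes the moment it is used.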
Machine learning is a statistical modelling technique, like data mining and business analytics, which finds correlations and patterns between inputs and outputs without necessarily capturing their cause-and-effect relationships. It excels at solving problems in which a wide range of potential inputs must be mapped onto a limited number of outputs; large data sets are available for training the algorithms; and the problems to be solved closely resemble those represented in the training data, e.g., image and speech recognition, language translation. But deviations from these assumptions can lead to poor results. This is clearly the case when attempting to apply machine learning to highly complex and open-ended problems like markets and human behavior.
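The failure mode when problems stop resembling the training data can be shown in a few lines. This is a minimal sketch with an invented one-dimensional relationship: a model fit on a narrow slice of data performs well on similar inputs and badly on inputs it never saw.

```python
import numpy as np

rng = np.random.default_rng(1)

# The true relationship is nonlinear, but the model never learns that,
# because the training data cover only a narrow slice of the input space.
def true_fn(x):
    return np.sin(x)

x_train = rng.uniform(0.0, 1.0, size=1_000)
y_train = true_fn(x_train) + 0.05 * rng.normal(size=1_000)

# Fit a simple linear model (slope + intercept) by least squares.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def predict(x):
    return slope * x + intercept

# In-distribution test: inputs resemble the training data; error is tiny.
x_in = rng.uniform(0.0, 1.0, size=1_000)
print("in-distribution MSE:", np.mean((predict(x_in) - true_fn(x_in)) ** 2))

# Out-of-distribution test: inputs outside the training range; error explodes.
x_out = rng.uniform(3.0, 4.0, size=1_000)
print("out-of-distribution MSE:", np.mean((predict(x_out) - true_fn(x_out)) ** 2))
```

Markets and human behavior are the out-of-distribution case writ large: the relationships themselves shift, so yesterday's patterns are a poor map of tomorrow's inputs.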
The second major challenge is that machine learning can serve as a magnifier for existing errors and biases in the data. Garbage in, garbage out applies as much to AI today as it has to computing since its early years. Given that AI algorithms are trained using the vast amounts of data collected over the years, if the data include past racial, gender, or other biases, the predictions of these AI algorithms will reflect those biases. "When this data is then used to make decisions that may plausibly reinforce those processes (by singling e.g. particular groups that are regarded as problematic out for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself." In more open, free-market, democratic societies there will always be ways for people to point out and counteract these biases, but in more centrally managed, autocratic societies such corrective tendencies will be weaker.
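That feedback loop can also be sketched in a few lines. The numbers and the allocation rule below are invented for illustration, not drawn from Farrell's article; the point is only that when a model's outputs shape the data it is later trained on, an initial skew compounds rather than corrects.

```python
import numpy as np

# Two groups with identical true offense rates, but a biased historical record.
true_rate = np.array([0.10, 0.10])
recorded = np.array([100.0, 50.0])   # group 0 is over-represented from the start

for year in range(10):
    # A predictive model ranks groups by their past records and concentrates
    # attention on the "riskier" one (any superlinear rule has this effect).
    weights = recorded ** 1.5
    attention = weights / weights.sum()
    # More attention produces more recorded incidents, independent of behavior,
    # and those records become next year's training data.
    recorded += 1_000 * attention * true_rate

ratio = recorded[0] / recorded[1]
print(f"recorded-incident ratio after 10 rounds: {ratio:.1f}x")  # grows well past the initial 2x
```

In a system with independent courts, a free press, and political opposition, someone eventually notices that the recorded gap has no basis in behavior; in a centralized system optimizing against its own records, there is no one positioned to break the loop.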
“In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors,” writes Farrell in conclusion.