“Humans have a good track record of innovation,” wrote Arjun Ramani and Zhengdong Wang in an essay which systematically explains “Why transformative artificial intelligence is really, really hard to achieve.” “The mechanization of agriculture, steam engines, electricity, modern medicine, computers, and the internet — these technologies radically changed the world. Still, the trend growth rate of GDP per capita in the world's frontier economy has never exceeded three percent per year.”
“Yet many people are optimistic that artificial intelligence is up to the job. AI is different from prior technologies, they say, because it is generally capable — able to perform a much wider range of tasks than previous technologies, including the process of innovation itself. Some think it could lead to a Moore’s Law for everything, or even risks on a par with those of pandemics and nuclear war.”
The essay consists of three sections: the transformational potential of AI is constrained by its hardest problems; major technical hurdles remain; and social and economic hurdles may limit AI’s impact. Let me summarize a few of the key arguments in each section.
The transformational potential of AI is constrained by its hardest problems
Rather than defining AI as a system that’s as good as or better than humans at a wide variety of economically valuable tasks, Ramani and Wang define transformative AI in terms of its economic impact, as measured by productivity growth in the performance of useful work.
“A powerful AI could one day perform all productive cognitive and physical labor. If it could automate the process of innovation itself, some economic growth models predict that GDP growth would not just break three percent per capita per year — it would accelerate.” However, the authors point out that this would be very hard to achieve.
In the mid-1960s, economist William Baumol described what’s become known as the Baumol effect: even if some sectors of the economy experience high productivity growth, overall productivity growth will be constrained by the economy’s weaker sectors.
To illustrate the concept, “consider a simple economy with two sectors, writing think-pieces and constructing buildings. Imagine that AI speeds up writing but not construction. Productivity increases and the economy grows. However, a think-piece is not a good substitute for a new building. So if the economy still demands what AI does not improve, like construction, those sectors become relatively more valuable and eat into the gains from writing. A 100x boost to writing speed may only lead to a 2x boost to the size of the economy.”
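The essay’s 100x-to-2x arithmetic can be reproduced with a minimal two-sector model. The sketch below is my own illustration, not from the essay: it assumes a fixed pool of labor and consumers who demand the two goods in fixed 1:1 proportions, so labor reallocates toward the slower sector until the two outputs match.

```python
def two_sector_gdp(a_write, a_build, labor=1.0):
    """Real GDP in a toy two-sector economy with Leontief (1:1) demand.

    a_write, a_build: labor productivity in each sector.
    Labor splits so that outputs are equal:
        a_write * l_w = a_build * l_b,  with  l_w + l_b = labor
    """
    l_write = labor * a_build / (a_write + a_build)
    output = a_write * l_write      # common output level of each good
    return 2 * output               # GDP = sum of quantities at equal base-year prices

baseline = two_sector_gdp(1, 1)     # both sectors equally productive
boosted = two_sector_gdp(100, 1)    # AI makes writing 100x faster
print(boosted / baseline)           # ≈ 1.98: a 100x writing boost yields only ~2x GDP
```

Because consumers still want buildings, nearly all labor ends up in construction, and the economy’s growth is capped by the sector AI did not improve.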
Over the past few decades, productivity growth has generally followed the Baumol effect. Productivity has significantly increased in manufacturing, driving down the prices and raising the quality of TVs and other consumer goods. Similarly, the IT industry has seen remarkable technology advances, which have significantly improved the price-performance of computers and personal devices. At the same time, the prices of labor-intensive services like healthcare, education, and child care have significantly gone up, as have housing prices, bringing down aggregate productivity growth.
Despite rapid progress in some AI subfields, major technical hurdles remain
Progress in fine motor control has hugely lagged progress in neural language models. In 1988, Hans Moravec observed what’s become known as Moravec’s paradox: “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Similarly, Steven Pinker wrote in 1994 that “the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.” AI’s recent advances with hard cognitive tasks are truly impressive, but progress in robotics with relatively simple physical tasks remains far behind.
The list of open research problems relevant to transformative AI continues. Embodied cognition is one such problem: the conjecture that cognition and having a body are inseparable, because cognition depends upon the kinds of experiences that come from having a body. Another is the view that instead of seeing intelligence as a collection of task-specific skills, we should see it as the ability to acquire new skills through learning, an idea presciently suggested by Alan Turing in his seminal 1950 paper “Computing Machinery and Intelligence.” “We may not need to solve some or even all of these open problems,” wrote Ramani and Wang. “But equally, we cannot yet definitively dismiss them, thus adding to our bottlenecks.”
Current methods may also not be enough. Training advanced AI models requires huge amounts of computing power and electricity, and scaling up by another order of magnitude would require hundreds of billions of dollars in additional spending, which may not even be feasible. We may also be running out of high-quality data for training these very large models. Good old-fashioned human tinkering may well be a better approach than brute-force scaling.
Humans remain a limiting factor in development. Human feedback during the development and training process makes AI more helpful and more reliable, but the high costs of human input constrain AI-based productivity. That’s unlikely to change, especially given our desire to align AI with human values through human feedback. Technical experts, the public and regulators want to keep humans in the AI loop.
A big share of human knowledge is tacit, unrecorded, and diffuse. Explicit knowledge is formal, codified, and can be readily explained to people and captured in a computer system. But tacit knowledge, a concept first introduced in the 1950s by Michael Polanyi, is the kind of knowledge we’re often not aware we have, and is therefore difficult to transfer to another person, let alone capture in a computer program. “We can know more than we can tell,” said Polanyi in what’s become known as Polanyi’s paradox. This commonsense phrase succinctly captures the fact that we tacitly know a lot about the way the world works, yet aren’t able to explicitly describe this knowledge.
We could be headed off in the wrong direction altogether. “We still struggle to concretely specify what we are trying to build,” said the essay. “We have little understanding of the nature of intelligence or humanity. Relevant philosophical problems, such as the grounds of moral status, qualia, and personal identity, have stumped humans for thousands of years. Thus, we are throwing dice into the dark, betting on our best hunches, which some believe produce only stochastic parrots.”
Even if technical AI progress continues, social and economic hurdles may limit its impact
The history of economic transformation is one of contingency. Like the steam engine, electricity, computers, and more recently, the internet, AI will significantly transform economies, societies, and our personal lives. And, as we’ve learned over the past two and a half centuries, historically transformative technologies have great potential from the outset. But realizing that potential requires a fundamental rethinking of organizations, industries, economies, and societal institutions, as well as major complementary investments, including business process redesign; innovative new products, applications, and business models; and the re-skilling of the workforce. All of this takes considerable time.
AI may not be able to automate precisely the sectors most in need of automation. Productivity has significantly increased in the sectors that have taken advantage of the breakout advances in digital technologies, such as IT, electronics, manufacturing and financial services. But automation and productivity have significantly lagged in labor-intensive sectors like healthcare, education, government, and transportation.
Automation alone is not enough for transformative economic growth. Even if AI-based automation could help overcome productivity constraints in slow-growing sectors, social and political barriers will likely continue to slow technology adoption in those sectors. Slow-growing sectors tend to be highly regulated, and thus significantly less responsive to efficiency gains and market competition.
A big share of the economy consists of sectors that tend to be more social in nature. “Even if AI can automate all production, we must still decide what to produce, which is a social process,” said the essay. “Education may be largely about motivating students, and teaching them to interact socially, rather than just transmitting facts. … Healthcare combines emotional support with more functional diagnoses and prescriptions. … As long as AI-produced outputs cannot substitute for that which is social, and therefore scarce, such outputs will command a growing ‘human premium,’ and produce Baumol-style effects that weigh on growth.”
In conclusion, the essay asks “How should we consider AI in light of these hurdles?” and answers its question with three key final observations:
- The most salient risks of AI are likely to be those of a prosaic powerful technology. Rather than scenarios where AI becomes an “autonomous, uncontrollable, and incomprehensible existential threat … we believe AI's most pressing harms are those that already exist or are likely in the near future, such as bias and misuse.”
- Do not over-index future expectations of growth on progress in one domain. Don’t expect AI to clear hurdles we don’t know how to clear ourselves. “We should also not take future breakthroughs as guaranteed — we may get them tomorrow, or not for a very long time.” Cast a wide net, “tracking progress across many domains of innovation, not just progress in AI's star subfield.”
- Accordingly, invest in the hardest problems across innovation and society. “Pause before jumping to the most flashy recent development in AI. From technical research challenges currently not in vogue to the puzzles of human relations that have persisted for generations, broad swaths of society will require first-rate human ingenuity to realize the promise of AI.”