“What if the A.I. Boosters Are Wrong?,” asked a recent NY Times article in its title. “A skeptical paper by Daron Acemoglu, a labor economist at M.I.T., has triggered a heated debate over whether artificial intelligence will supercharge productivity,” it added. “The bullish camp has great hopes for AI,” such as the possibility that the technology will usher in the next industrial revolution, which could help wipe out poverty around the world. “But if the boosters are wrong, it could be trouble for the developed world, which is in desperate need of a productivity breakthrough as its work force ages and declines.”
(Before continuing with the blog, let me mention that on October 14, 2024, Professor Acemoglu was awarded the 2024 Nobel Memorial Prize in Economic Sciences along with his MIT colleague Simon Johnson, and University of Chicago economist James Robinson.)
“Artificial intelligence (AI) has captured imaginations,” wrote Acemoglu in “The Simple Macroeconomics of AI,” the skeptical paper referenced in the NYT article. “Promises of rapid, even unparalleled, productivity growth as well as new pathways for complementing humans have become commonplace. There is no doubt that recent developments in generative AI and large language models that produce text, information and images — and Shakespearean sonnets — in response to simple user prompts are impressive and even spellbinding.”
“AI will have implications for the macroeconomy, productivity, wages and inequality, but all of them are very hard to predict,” he added. “This has not stopped a series of forecasts over the last year, often centering on the productivity gains that AI will trigger. Some experts believe that truly transformative implications, including artificial general intelligence (AGI) enabling AI to perform essentially all human tasks, could be around the corner.” But, according to Acemoglu, AI will contribute only modest improvements to worker productivity and will add no more than 1 percent to U.S. economic output over the next decade.
The bullish camp has great hopes for AI. For example, a 2023 report by Goldman Sachs estimated that generative AI (GenAI) could raise annual US labor productivity growth by around 1.5% over a 10-year period following widespread adoption, which could eventually increase annual global GDP by an economically significant 7%. And a 2023 McKinsey report suggested that GenAI could boost the global economy by $17.1 to $25.6 trillion over the coming decade.
George Mason University economist Tyler Cowen argued in a recent blog that the model behind Acemoglu’s analysis is wrong and underplays AI’s potential to spur scientific advances and business innovations. “A lot of the benefits of A.I. will come from getting rid of the least productive firms,” wrote Cowen.
So, who is right, the bullish or the skeptical AI camp? Most everyone agrees that after decades of unfulfilled promises and hype, AI will likely become one of the most transformative technologies of the 21st century over time. But in the near term, a number of recent articles have been asking, “Is the AI Revolution Losing Steam?” “The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant,” said a WSJ article published earlier this year. “It sure seems like the AI hype train is just leaving the station, and we should all hop aboard. But significant disappointment may be on the horizon, both in terms of what AI can do, and the returns it will generate for investors.”
To shed light on this question, I turned to “Gen AI: Too Much Spend, Too Little Benefit?,” a report by Goldman Sachs Research published in June 2024 as part of its Top of Mind series of market insights created and edited by Goldman Sachs (GS) senior strategist Allison Nathan.
“The promise of generative AI technology to transform companies, industries, and societies is leading tech giants and beyond to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid,” wrote Nathan. “But this spending has little to show for it so far beyond reports of efficiency gains among developers.” Her report includes a number of interviews, including one with Professor Acemoglu that I found quite insightful. Let me discuss a few of the topics explored in the interview.
Nathan started the interview by asking: “In a recent paper, you argued that the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters — including Goldman Sachs — expect. Specifically, you forecast a ~0.5% increase in productivity and ~1% increase in GDP in the next 10 years vs. GS economists’ estimates of a ~9% increase in productivity and 6.1% increase in GDP. Why are you less optimistic on AI’s potential economic impacts?”
Acemoglu replied that the forecast differences are primarily about the timing of AI’s economic impacts rather than about the ultimate promise of the technology. “Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc. as well as create new products and platforms.” But the economic impact of historically transformative technologies like AI takes time to play out.
“AI model advances likely won’t occur nearly as quickly — or be nearly as impressive — as many believe.” In the near future, AI will primarily increase the efficiency of existing production processes and workers, so the impact over the shorter horizon depends wholly on the number of production processes that the technology will automate, which Acemoglu expects to be less than 5% of all tasks. In the near term, AI will primarily be of help in automating relatively simple tasks. Many tasks that humans currently perform, for example in the areas of transportation, manufacturing, mining, etc., are multifaceted and require real-world interaction, which AI won’t be able to materially improve anytime soon.
Nathan then asked: “While AI technology cannot perform many complex tasks well today — let alone in a cost-effective manner — the historical record suggests that as technologies evolve, they both improve and become less costly. Won’t AI technology follow a similar pattern?”
“Absolutely,” replied Acemoglu. “But I am less convinced that throwing more data and GPU capacity at AI models will achieve these improvements more quickly.” For open-ended tasks like customer service or understanding and summarizing text, it’s not clear that doubling the AI’s compute and data infrastructure will double AI’s capabilities. Major AI advances will require much more than more data and GPU capacity.
In addition, “the current architecture of AI technology itself may have limitations. Human cognition involves many types of cognitive processes, sensory inputs, and reasoning capabilities.” Large language models (LLMs) today have proven more impressive than many people would have predicted, but a big leap of faith is still required to believe that the ability to statistically predict the next word in a sentence will achieve anything close to human-level intelligence.
“Over the longer term, what odds do you place on AI technology achieving superintelligence?”
“I question whether AI technology can achieve superintelligence over even longer horizons because, as I said, it is very difficult to imagine that an LLM will have the same cognitive capabilities as humans to pose questions, develop solutions, then test those solutions and adapt them to new circumstances.”
“So, could the impact of AI technology over the longer term prove more significant than you expect?”
“Technological innovation has undoubtedly meaningfully impacted nearly every facet of our lives. But that impact is not a law of nature. It depends on the types of technologies that we invent and how we use them. So, again, my hope is that we use AI technology to create new tasks, products, business occupations, and competencies. … Such an evolution would ultimately lead to much better possibilities for human discovery. But it is by no means guaranteed.”
“Given everything we’ve discussed, is the current enthusiasm around AI technology overdone?,” asked Nathan in conclusion.
“Every human invention should be celebrated, and generative AI is a true human invention. But too much optimism and hype may lead to the premature use of technologies that are not yet ready for prime time. This risk seems particularly high today for using AI to advance automation.”
“Although I don't believe superintelligence and evil AI pose major threats, I often think about how the current risks might be perceived looking back 50 years from now. The risk that our children or grandchildren in 2074 accuse us of moving too slowly in 2024 at the expense of growth seems far lower than the risk that we end up moving too quickly and destroy institutions, democracy, and beyond in the process. So, the costs of the mistakes that we risk making are much more asymmetric on the downside. That’s why it’s important to resist the hype and take a somewhat cautious approach, which may include better regulatory tools, as AI technologies continue to evolve.”