Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.


The July 26 issue of The Economist included a special focus on “The Economics of Superintelligence,” with three articles on the subject. “For most of history the safest prediction has been that things will continue much as they are,” said the lead article. “But sometimes the future is unrecognisable. The tech bosses of Silicon Valley say humanity is approaching such a moment, because in just a few years artificial intelligence (AI) will be better than the average human being at all cognitive tasks. You do not need to put high odds on them being right to see that their claim needs thinking through. Were it to come true, the consequences would be as great as anything in the history of the world economy.”

Over the past decade, AI’s powers and breakthroughs have repeatedly outrun predictions. In 2016, AlphaGo, a deep-learning Go program developed by Google DeepMind, unexpectedly beat Lee Sedol, one of the world’s top Go players, by a large margin, even though experts had predicted that it would take several more years for AlphaGo to defeat top human professional players. This was considered a major milestone in the history of deep-learning AI, right up there with IBM Deep Blue’s equally unexpected victory over then-reigning chess world champion Garry Kasparov in 1997.

Then on November 30, 2022, OpenAI released ChatGPT, a conversational large language model (LLM) that interacts with users in a dialogue format. “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” OpenAI encouraged users to try ChatGPT, and just five days after its release over one million had done so.

“What does that say about AI’s powers in 2030 or 2032?” asked The Economist. Opinions vary. OpenAI’s CEO Sam Altman, for example, recently posted a blog, “The Gentle Singularity,” in which he wrote that “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”

“In some big sense, ChatGPT is already more powerful than any human who has ever lived,” he added. “Hundreds of millions of people rely on it every day and for increasingly important tasks; a small new capability can create a hugely positive impact; a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact.”

Not surprisingly, Altman is quite optimistic. “AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have.” 

But he also notes that while “the 2030s are likely going to be wildly different from any time that has come before, … in the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.”

Meta’s CEO Mark Zuckerberg is similarly optimistic. “Over the last few months we have begun to see glimpses of our AI systems improving themselves,” he recently wrote. “The improvement is slow for now, but undeniable. Developing superintelligence is now in sight. It seems clear that in the coming years, AI will improve all our existing systems and enable the creation and discovery of new things that aren't imaginable today. But it is an open question what we will direct superintelligence towards.”

“I am extremely optimistic that superintelligence will help humanity accelerate our pace of progress,” he added. “But perhaps even more important is that superintelligence has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose.”

However, in a WSJ article, “Why Superintelligent AI Isn’t Taking Over Anytime Soon,” technology columnist Christopher Mims wrote that “Despite claims from top names in AI, researchers argue fundamental flaws in reasoning models mean bots aren’t on the verge of exceeding human smarts.” 

Mims references “The Illusion of Thinking,” a recent paper in which six Apple researchers evaluated a number of large reasoning models (LRMs) from leading AI labs. LRMs are large language models that spend considerably more time analyzing problems using chain-of-thought, a technique that improves the reasoning ability of LLMs by inducing them to solve a problem through a series of intermediate steps before giving a final answer.
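The chain-of-thought idea is easy to see at the prompt level. The sketch below (my own illustrative example, with hypothetical prompt strings and no calls to any real model API) contrasts a direct prompt with one that induces the intermediate reasoning steps described above:

```python
# Illustrative sketch of chain-of-thought prompting. The prompts are
# hypothetical examples; no model or API is invoked here.

question = "A store has 23 apples. It sells 7 and receives 12 more. How many now?"

# A direct prompt asks only for the final answer.
direct_prompt = f"{question}\nAnswer:"

# A chain-of-thought prompt induces the model to work through
# intermediate steps before committing to a final answer.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step.\n"
    "1. Start with 23 apples.\n"
    "2. Selling 7 leaves 23 - 7 = 16.\n"
    "3. Receiving 12 more gives 16 + 12 = 28.\n"
    "Final answer:"
)

print(cot_prompt)
```

The Apple researchers’ finding is that this extra reasoning helps on medium-complexity tasks but, beyond a certain complexity, the benefit disappears entirely.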

After evaluating the performance of LRMs from different vendors on a diverse set of problems, the Apple researchers found little evidence that they are capable of reasoning anywhere close to the level their makers claim. They showed that the reasoning ability of LRMs increases with problem complexity up to a point, then declines, and eventually collapses beyond certain problem complexities. By comparing LRMs with standard LLMs, they identified three performance regimes:

  1. low-complexity tasks where standard models surprisingly outperform LRMs;
  2. medium-complexity tasks where additional reasoning in LRMs demonstrates advantage; and
  3. high-complexity tasks where both LLMs and LRMs completely collapse.

Mims warns that “one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should — even as it’s shown itself to have antisocial tendencies such as ‘opportunistic blackmail’ — and rely on it more than is wise. In so doing, we make ourselves vulnerable to its propensity to fail when it matters most.”

These various articles reminded me that in 1930, John Maynard Keynes, one of the most influential economists of the 20th century, published an essay in which he wrote:

“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come, namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”

Given that “the increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption,” Keynes predicted that by 2030, most people would be working a 15-hour week, which would satisfy their need to work in order to feel useful and contented. “Thus for the first time since his creation man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

We are only five years away from 2030. Will Keynes’s prediction come true? “What if AI made the world’s economic growth explode?” asked another article in The Economist’s issue on superintelligence.

If the evangelists of Silicon Valley are to be believed, economic growth is about to significantly accelerate, said the article. “They maintain that artificial general intelligence (AGI), capable of outperforming most people at most desk jobs, will soon lift annual GDP growth to 20-30% a year, or more. That may sound preposterous, but for most of human history, they point out, so was the idea that the economy would grow at all.”

“The likelihood that AI may soon make lots of workers redundant is well known. What is much less discussed is the hope that AI can set the world on a path of explosive growth. That would have profound consequences. Markets not just for labour, but also for goods, services and financial assets would be upended. Economists have been trying to think through how AGI could reshape the world. The picture that is emerging is perhaps counterintuitive and certainly mind-boggling.”

Most economists agree that AI has the potential to raise productivity and thus boost GDP growth. The burning questions are: how much, and how long will it take? Some, like MIT’s Nobel laureate economist Daron Acemoglu, believe that AI will contribute only modest improvements to worker productivity and will add no more than 1 percent to U.S. economic output over the next decade.

“AI will have implications for the macroeconomy, productivity, wages and inequality, but all of them are very hard to predict,” wrote Acemoglu in “The Simple Macroeconomics of AI,” an article published in May of 2024. “This has not stopped a series of forecasts over the last year, often centering on the productivity gains that AI will trigger. Some experts believe that truly transformative implications, including artificial general intelligence (AGI) enabling AI to perform essentially all human tasks, could be around the corner.” But, he is skeptical of the significantly higher estimates made by AI boosters.

“Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc. as well as create new products and platforms,” added Acemoglu. But the economic impact of historically transformative technologies like AI takes time to play out. In the near future, AI will primarily increase the efficiency of existing production processes and workers, so its impact over the shorter horizon depends wholly on the number of production processes that the technology will automate, which he expects to be less than 5% of all tasks.
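The intuition behind Acemoglu’s modest estimate is a simple task-based calculation: the aggregate productivity gain is roughly the share of tasks automated times the average cost saving on those tasks. Here is a back-of-the-envelope sketch; the 5% task share comes from the article above, while the 15% per-task cost saving is an assumption of mine chosen purely for illustration, not a figure from Acemoglu’s paper:

```python
# Back-of-the-envelope sketch of the task-based productivity logic.
# task_share comes from the article above; cost_savings is an
# illustrative assumption, not a figure from Acemoglu's paper.

task_share = 0.05    # fraction of tasks AI automates over the decade
cost_savings = 0.15  # assumed average cost saving on automated tasks

# Aggregate productivity gain ~ task share x per-task cost saving.
tfp_gain = task_share * cost_savings
print(f"Implied productivity gain over the decade: {tfp_gain:.2%}")
```

A small task share multiplied by modest per-task savings yields a gain well under 1 percent, which is why estimates of this kind come in far below the forecasts of AI boosters.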

Despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth, wrote The Economist in conclusion. “Silicon Valley, in other words, has yet to convince the world of its thesis. But the progress of AI has for the best part of a decade outpaced forecasts of when it would pass various benchmarks. … If the consensus about AI’s effects on the economy is as behind-the-curve as most predictions of AI’s capabilities have been, then investors — and everyone else — are in for a big surprise.”

