“It sure seems like the AI hype train is just leaving the station, and we should all hop aboard.” So opens “The AI Revolution Is Already Losing Steam,” a recent WSJ article by technology columnist Christopher Mims. “But significant disappointment may be on the horizon, both in terms of what AI can do, and the returns it will generate for investors. The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI. New, competing AI models are popping up constantly, but it takes a long time for them to have a meaningful impact on how most people actually work.”
As we’ve learned since the advent of the Industrial Revolution, there has generally been a significant time lag between the broad acceptance of a major new transformative technology and its ensuing impact on industries and economies. After transitioning from R&D labs to the marketplace, it takes considerable time, often decades, for the products, services, and business models based on the new technologies to be widely embraced across economies and societies, and for their full benefits to be realized.
The reason, explained economists Erik Brynjolfsson, Daniel Rock, and Chad Syverson in “The Productivity J-Curve,” a 2018 NBER working paper, is that while historically transformative technologies have the potential to radically change the economic environment, “realizing that potential requires larger intangible and often unmeasured investments and a fundamental rethinking of the organization of production itself.”
Their paper identifies two phases, investment and harvesting, in the life cycle of a historically transformative technology. In general, these technologies require massive complementary investments, such as business process redesign, co-invention of new products and business models, and the re-skilling of the workforce. Moreover, the more transformative the technologies, the longer it takes for them to reach the harvesting phase, when they are widely embraced by companies and industries across the economy. Such an evolution has been pithily captured in what’s become known as Amara’s Law: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
AI has been overestimated since it emerged as a promising area of computer science research in the 1950s with the aim of developing intelligent machines capable of handling a variety of human-like tasks. But after a couple of decades of unfulfilled promises and hype, the field went through a so-called AI winter of reduced interest and funding that nearly killed it in the 1980s.
AI was reborn in the 1990s as a data-centric discipline. Instead of trying to explicitly program human-like intelligence in software, the field embraced a statistical approach based on searching for patterns in vast amounts of data with increasingly powerful supercomputers and sophisticated algorithms. In the 2000s, the explosive growth of the internet led to innovations in big data and data science, followed in the 2010s by major advances in multi-layered deep learning algorithms. More recently, the advent of foundation models, including generative AI (GenAI), large language models, and chatbots, has been taking the AI revolution to a whole new level.
There’s little question that AI has become the defining technology of our era. But could unrealistic expectations and hype once more lead to unfulfilled promises and disappointment? The WSJ article points out that questions are being raised “about whether AI could become commoditized, about its potential to produce revenue and especially profits, and whether a new economy is actually being born. They also suggest that spending on AI is probably getting ahead of itself in a way we last saw during the fiber-optic boom of the late 1990s — a boom that led to some of the biggest crashes of the first dot-com bubble.”
One argument that we may be in an AI bubble is a recent estimate by the VC firm Sequoia that the AI industry spent $50 billion in 2023 on the Nvidia chips used to train advanced AI models, but brought in only $3 billion in revenue. “That difference is alarming,” wrote Mims, “but what really matters to the long-term health of the industry is how much it costs to run AIs.”
But according to Christopher Mims, “Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies.” In other words, the marketplace success of a historically transformative technology like AI is as dependent on its acceptance across the economy as it is on its technological advances.
To illustrate his point, Mims referenced a second WSJ article he recently published, “What I Got Wrong in a Decade of Predicting the Future of Tech.” “Over nearly 500 articles, I’ve made plenty of mistakes,” he wrote. “Here are five big lessons those blown calls and boneheaded pronouncements have taught me along the way”:
Disruption is overrated. “The most-worshiped idol in all of tech — the notion that any sufficiently nimble upstart can defeat bigger, slower, sclerotic competitors — has proved to be a false one.” Disruption does happen, but for many reasons, it doesn’t happen nearly as often as we’ve been led to believe. One reason is that “many tech leaders have internalized a hypercompetitive paranoia … that inspires them to either acquire or copy and kill every possible upstart.”
Hardly a day goes by, he adds, “when a startup, investor, or journalist — including yours truly — doesn’t trumpet the power of a new technology to completely upend even the biggest and most hidebound of industries. Don’t believe it. In a world in which companies learn from one another faster than ever, incumbents have an ability to reinvent themselves at a pace that simply wasn’t possible in the past.”
Human factors are everything. “What’s the number one factor governing the pace of technological change?” he asked. Experts often cite R&D spending or a country’s net brain power, falling for “the fallacy that all it takes for the next big thing to transform our lives is for it to be invented.”
“I’ve made this error again and again,” he said. We are creatures of habit. “The challenge of getting people to change their ways is the reason that adoption of new tech is always much slower than it would be if we were all coldly rational utilitarians bent solely on maximizing our productivity or pleasure.”
We’re all susceptible to this one kind of tech B.S. “Tech is, to put it bluntly, full of people lying to themselves. As countless cult leaders, multilevel marketing recruits, and CrossFit coaches know, one powerful way to convince people that following you will change their life is to first convince yourself.”
This is not surprising. “Today’s venture-capital-backed founders need to have a vision, articulate it clearly, and convince everyone around them that joining up is the equivalent of finding a winning lottery ticket.” Beyond startups, “tech CEOs have to go through this same ritual every time they launch some big new venture or pivot their company, even though most of those efforts will come to naught.”
Tech bubbles are useful even when they’re wasteful. The amounts of money thrown at startups at the height of a tech investment bubble can seem like a kind of folly of people who’ve given up solving real problems, said Mims. But while most new ideas aren’t likely to go anywhere, bubbles are very good for innovation in general, a point he illustrated by citing a 2014 Rolling Stone interview with Bill Gates. After noting that innovation in California was at its absolute peak, Gates added: “Sure, half of the companies are silly, and you know two-thirds of them are going to go bankrupt, but the dozen or so ideas that emerge out of that are going to be really important.”
“A decade on, it appears he was correct,” said Mims. “The last tech bubble gave us some deeply unserious ‘innovations’ like Web3 and the metaverse. But it also gave us a fourth industrial revolution, powered by the mobile internet, automation and artificial intelligence, the impacts of which will be playing out for decades to come.”
We’ve got more power than we think. “Having all the wealth and technology in the world doesn’t matter if we don’t have the wisdom to use it in the right manner. Early in my career, I bought into the notion, espoused by science-fiction author William Gibson, that all cultural change is driven by technology.”
“I’ve now witnessed enough of both technological and social change to understand that the reverse is also — and perhaps more often — true. Collectively, we have agency over how new tech is developed, released, and used, and we’d be foolish not to use it. Creating and rolling out new tech without guardrails is a recipe for a world in which tech is as likely to supercharge our worst impulses as it is to enhance our lives. … By paying attention to what’s just over the horizon, my hope is that in our collective, imperfect, democratic way, we can figure out how to use new technologies, rather than being used by them.”
“None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries,” wrote Mims in conclusion. “The problem is that the current level of investment — in startups and by big companies — seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. Mounting evidence suggests that won’t be the case.”
Great piece and I pretty much agree with all of it. I'm not sure we're in a bubble per se but I do think that a lot of the greatest opportunities are probably going to be realized further out than some are counting on.
Posted by: Gordon Haff | September 07, 2024 at 10:25 AM