Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.


“The U.S. is spending billions of dollars and burning gigawatts of energy in a rush to beat China to the next evolutionary leap in artificial intelligence — one so great, some boosters say, that it will rival the atomic bomb in its power to change the global order,” wrote the WSJ in a recent article “China Has a Different Vision for AI. It Might Be Smarter.” “Since the release of OpenAI’s ChatGPT nearly three years ago, Silicon Valley has spent mountains of money in pursuit of AI’s holy grail: artificial general intelligence that matches or beats human thinking.”

“China is running a different race,” the article added. “In China, by contrast, leader Xi Jinping has recently had little to say about AGI. Instead, he is pushing the country’s tech industry to be ‘strongly oriented toward applications’ — building practical, low-cost tools that boost China’s efficiency and can be marketed easily.”

AI is clearly a historically transformative technology. The transition to the age of AI will be at least as big and consequential as the transition from the industrial economy to the internet-based digital economy of the past few decades. The machines of the industrial economy made up for our physical limitations: steam engines enhanced our physical power, railroads and cars helped us travel faster, and airplanes gave us the ability to fly. But technology is now increasingly being applied to activities requiring cognitive capabilities and problem-solving intelligence that not long ago were viewed as the exclusive domain of humans.

The diverging visions for the future of AI represent a head-to-head bet with significant stakes for the US and China. US enthusiasts believe that AGI “will give the U.S. insurmountable military advantages, help cure cancer and solve climate change, and eliminate the need for people to perform routine work such as accounting and customer service.” And, if China’s more pragmatic gamble turns out to be wrong, “it could find itself lagging far behind the U.S. in the most consequential technology of the 21st century.”

“Opinions about artificial intelligence tend to fall on a wide spectrum,” wrote The Economist in a recent article. “At one extreme is the utopian view that AI will cause runaway economic growth, accelerate scientific research and perhaps make humans immortal. At the other extreme is the dystopian view that AI will cause abrupt, widespread job losses and economic disruption, and perhaps go rogue and wipe out humanity.”

But what if artificial intelligence is just a “normal” technology? That is the question The Economist asked in its article’s title. What if its rise follows the path of previous technological revolutions? The article cites a paper published earlier this year, “AI as Normal Technology,” by Princeton computer science professor Arvind Narayanan and his PhD student Sayash Kapoor.

“We articulate a vision of artificial intelligence (AI) as normal technology,” noted Narayanan and Kapoor. “To view AI as normal is not to understate its impact — even transformative, general-purpose technologies such as electricity and the internet are ‘normal’ in our conception. But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity.”

“The normal technology frame is about the relationship between technology and society,” the authors added. “It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory.”

Their paper is organized into four parts. Let me briefly summarize each one.

The Speed of Progress

Transformative economic and societal impacts will be slow (on the timescale of decades); there is a critical distinction between AI methods, AI applications, and AI adoption, each of which happens on its own timescale.

“Will the progress of AI be gradual, allowing people and institutions to adapt as AI capabilities and adoption increase, or will there be jumps leading to massive disruption, or even a technological singularity? Our approach to this question is to analyze highly consequential tasks separately from less consequential tasks and to begin by analyzing the speed of adoption and diffusion of AI before returning to the speed of innovation and invention.”

As has been the case with general-purpose technologies over the past two centuries, “diffusion is limited by the speed of human, organizational, and institutional change.” The impact of AI will materialize not when the technology’s capabilities improve, but when those improvements are translated into concrete applications and diffused throughout the economy.

What a World With Advanced AI Might Look Like

In a world with advanced AI, there will likely be a division of labor between humans and AI, but control will remain primarily in the hands of people and organizations; indeed, a growing proportion of what people do in their jobs will be to work with and control AI.

“We argue that reliance on the slippery concepts of ‘intelligence’ and ‘superintelligence’ has clouded our ability to reason clearly about a world with advanced AI. By unpacking intelligence into distinct underlying concepts, capability and power, we rebut the notion that human labor will be superfluous in a world with ‘superintelligent’ AI, and present an alternative vision.”

Potential AI Risks

The authors examine the implications of AI as normal technology for AI risks. “We analyze accidents, arms races, misuse, and misalignment, and argue that viewing AI as normal technology leads to fundamentally different conclusions about mitigations compared to viewing AI as being humanlike.”

“Our view is that, just like other technologies, deployers and developers should have the primary responsibility for mitigating accidents in AI systems. How effectively they will do so depends on their incentives, as well as on progress in mitigation methods. In many cases, market forces will provide an adequate incentive, but safety regulation should fill any gaps.”

Policy Implications

Given the divergence between different possible AI futures (normal technology versus potentially uncontrollable superintelligence), policymakers should focus their efforts on a strategy centered on resilience in order to improve our ability to deal with unexpected developments.

“We advocate for reducing uncertainty as a first-rate policy goal and resilience as the overarching approach to catastrophic risks. We argue that drastic interventions premised on the difficulty of controlling superintelligent AI will, in fact, make things much worse if AI turns out to be normal technology — the downsides of which will be likely to mirror those of previous technologies that are deployed in capitalistic societies, such as inequality.”

Final Thoughts

“AI as normal technology is a worldview that stands in contrast to the worldview of AI as impending superintelligence,” wrote Narayanan and Kapoor in conclusion. “Over time, however, the superintelligence view has become dominant in AI discourse, to the extent that someone steeped in it might not recognize that there exists another coherent way to conceptualize the present and future of AI. Thus, it might be hard to recognize the underlying reasons why different people might sincerely have dramatically differing opinions about AI progress, risks, and policy.”

This debate is particularly important because, as the WSJ article observed, “With growing fears of an AI bubble, Beijing is charting a pragmatic alternative to Silicon Valley’s pursuit of artificial superintelligence.” Given that it’s uncertain how soon (if ever) AGI can be achieved, “our nation risks falling behind China, which is far less concerned with creating A.I. powerful enough to surpass humans and much more focused on using the technology we have now.”


One response to “Artificial Superintelligence or Normal Technology?”

  1. devm

    Irving, excellent piece – and one that cuts through the noise at exactly the right moment. Your framing of the “normal technology” versus “superintelligence” debate is crucial, especially as we watch billions flow toward AGI moonshots while China quietly builds the pragmatic applications that will actually move the economic needle.

    From where I sit, working with organizations on transformation and teaching management, your core argument about diffusion being “limited by the speed of human, organizational, and institutional change” isn’t just theory – it’s what I see every day. The gap between AI capability and AI adoption isn’t technical, it’s human. And nowhere is this more evident than in education, where we’re witnessing exactly the kind of slow, uncertain adoption you describe.

    Your framework helps clarify what’s really happening. This isn’t a “cheating crisis” – it’s a diffusion mismatch. Microsoft’s 2025 AI in Education Report shows over 80% of education organizations using generative AI. We’re deploying the technology faster than we’re developing the institutional frameworks to guide it. In my classroom, I’ve used Claude to redesign case studies – shorter, more focused – which has increased both engagement and learning outcomes. Students prefer AI tutor bots for practice tests. These tools work.

    This connects directly to your point about “division of labor between humans and AI, but control will primarily be in the hands of people.” Entry-level hiring at major tech firms has dropped 50% since 2019 – full disclosure, I’ve helped my clients do this in tech, call centers and marketing – not because AI is replacing humans, but because we’re redesigning workflows around human-AI collaboration. Education needs the same intentional redesign. McKinsey suggests AI could save teachers 13 hours per week. The question you’re really asking is: do we use that time to teach students to be thoughtful users and critics of AI, or waste it on honor code enforcement?

    Your emphasis on resilience and reducing uncertainty rather than fear-based policy is crucial here. We’re repeating the pattern you warned about: reacting rather than designing. Your piece reminds me of your enthusiasm and prescience in the early days of the commercial internet at IBM – you saw what reduction of friction and the network economy could bring, and most of that came true. But we didn’t anticipate the harm from fast, ubiquitous misinformation spread. I hope we’re learning from that experience. This time, let’s be proactive about designing AI integration that builds human capability rather than bypasses it.

    Your “normal technology” framework gives us permission to stop catastrophizing and start building. At the enterprise level, AI success has been mixed – exactly the pattern you describe. Meanwhile, individual adoption is ubiquitous because the tools solve real problems. Education needs to follow the same path: focus on solving real learning problems with AI.

    Thanks for bringing this clarity at exactly the right moment. Your voice has been crucial at every major technology inflection point, and this one is no different.
