Hard Fork is a weekly podcast from the NY Times that examines the impact of the latest developments in technology on the economy and society. The podcast is hosted by Kevin Roose, NY Times technology journalist, and Casey Newton, founder of the Platformer newsletter. A few weeks ago the podcast discussed “Is AI a ‘Normal’ Technology?” with Princeton computer science professor Arvind Narayanan, co-author, with Princeton PhD candidate Sayash Kapoor, of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, a book published in September of 2024.
Roose and Newton started the podcast by noting that the previous week their guest had been AI researcher Daniel Kokotajlo, founder and executive director of the AI Futures Project, who discussed the project’s new article, AI 2027, which predicted that “the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.” Narayanan is highly skeptical of the fast takeoff predictions of AI 2027. In a recent essay, also co-authored with Kapoor, “AI as a Normal Technology,” they laid out an alternative vision of AI, “basically one that treats AI not as some looming superintelligence that’s going to go rogue and take over humanity, but as a type of technology like any other — like electricity, like the internet, like the PC — that have taken a period of years or even decades to fully diffuse throughout society.”
This is a very important question I’ve been thinking a lot about. Will AI’s impact on economies and societies be similar to that of the previous technological revolutions of the past two and a half centuries (that is, is AI a normal technology?), or is AI in a class by itself, destined to exceed the impact of the Industrial Revolution in just one decade?
In “The Productivity J-Curve,” an article published a few years ago, economists Erik Brynjolfsson, Daniel Rock, and Chad Syverson analyzed the impact of historically transformative technologies throughout the Industrial Revolution, including more recent ones like computers, the Internet, and now AI.
“General purpose technologies (GPTs) are engines for growth,” they wrote in “The Productivity J-Curve.” “These are the defining technologies of their times and can radically change the economic environment. They have great potential from the outset, but realizing that potential requires larger intangible and often unmeasured investments and a fundamental rethinking of the organization of production itself.” Since these technologies are general purpose in nature, they require massive complementary investments that often take decades, e.g., business process redesign, the reskilling of the workforce, and the co-invention of new products and business models.
They further explained in “AI and the Modern Productivity Paradox” that the deployment time lags of AI may well be longer, because attaining the full benefits of AI’s potential will likely require a larger number of complementary co-inventions, investments, and regulations. AI is likely to become one of the most important technologies of the 21st century, but we’re still in the early stages of AI’s diffusion. Considerable innovations, investments, and regulatory policies will be required for its wider deployment in highly complex areas like robotics, self-driving cars, intelligent assistants, and smart healthcare applications.
“To view AI as normal is not to understate its impact — even transformative, general-purpose technologies such as electricity and the internet are ‘normal’ in our conception,” wrote Narayanan and Kapoor in their essay “AI as a Normal Technology.” “But it is in contrast to both utopian and dystopian visions of the future of AI which have a common tendency to treat it akin to a separate species — a highly autonomous, potentially superintelligent entity.”
“The statement ‘AI is a normal technology’ is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it. We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future.”
“The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory.”
Let me summarize some of the key points brought up in the discussion between Newton and Roose, the Hard Fork hosts, and Narayanan.
Hard Fork: “One of the core arguments you make is that AI progress, or the fast takeoff scenario that some folks, including former guests of this show, have envisioned is not going to happen because it’s going to be bottle-necked by this slower process of diffusion.” Basically, you argue that while R&D labs keep inventing AI technologies that can do all kinds of amazing and useful things, people and institutions are much slower to change.
Narayanan: “Our view is that it actually doesn’t seem like technology adoption is getting faster.” While we’re familiar with the claim that 40% of US adults are using GenAI, there is a difference between having used an AI technology and relying on it heavily for work. The intensity of use is only something like one hour per workweek, which translates into only a fraction of a percentage point increase in productivity.
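As a rough back-of-envelope check (the multipliers here are illustrative assumptions, not figures from the podcast): if roughly 40% of workers use generative AI for about one hour of a 40-hour workweek, and that hour is, say, 25% more productive than it would otherwise be, the aggregate boost to labor productivity is on the order of

$$0.40 \times \tfrac{1}{40} \times 0.25 \approx 0.0025 = 0.25\%,$$

i.e., a fraction of a percentage point, which is the scale Narayanan has in mind.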
Hard Fork: What do you make of the thesis of the AI Futures Project that “we’ll start to have these autonomous coding agents that will automate the work of AI research and development, and will essentially speed up the iteration loop for creating more and more powerful AI systems.” Where is the hole in that scenario?
Narayanan: We completely agree that AI capabilities are already improving rapidly and could be further accelerated with the use of AI itself for AI development. But that doesn’t mean that these AI systems will necessarily become more powerful in a good way. “Power is not just a property of the AI system itself. It’s a property of both the AI system and the environment in which it is deployed. And that environment is something that we control.” We should be careful with how much control, autonomy, and power we hand over to AI systems before we understand how well they work and how they can go wrong. It’s also not clear when, or whether, AI is going to make people more efficient and improve the productivity of the economy.
Hard Fork: “What are some of the natural brakes that you see happening in organizations that prevent technology from spreading faster than it does today?”
Narayanan: “This is where we think we can learn a lot from past technologies.” Let’s first look at safety. During the first several decades of the history of automobiles, vehicle safety was not considered the responsibility of manufacturers. It was entirely on the user. But, “once safety began to be seen as a responsibility of manufacturers, it no longer made business sense for them to develop cars with very poor safety engineering because whenever those cars caused accidents, there would be a negative PR consequence for the car company.” And once it becomes clear who is responsible for negative safety consequences, you can have regulations and set safety standards. This will force companies to deploy AI under supervised, controlled conditions.
Hard Fork: Another aspect of safety is alignment, that is, the idea that we should build AI systems that adhere to human values. “And that if we don’t do that, there is some potential that eventually, they will go rogue and wreak havoc. You are very skeptical about the current approach to model alignment. Why is that?”
Narayanan: Recall the difference between capability and power. The argument, as usually made, is that as AI systems become more capable, they will also become more powerful. “And once you have these super powerful systems, we have to ensure that they are aligned with human values. Otherwise, they’re going to be in control of whole economies or critical infrastructure or whatever. And if they’re not aligned, they can go rogue and they can have catastrophic consequences for humanity.”
We need strict safety and alignment standards. “We don’t think one should get to the super power stage, and if you get to that stage, then tinkering with these technical aspects of AI systems is a fool’s errand. It’s just not going to work. Where we need to put the brakes is between those increases in capabilities and saying, oh, AI is doing better than humans now. We don’t need humans to provision. We’re going to put AI in charge of all these things. And that is something where we do think we can exercise agency.”
Hard Fork: “The leading AI labs are all trying to give their models more agency, more autonomy, to allow them to do longer sequences of tasks without requiring a human to intervene. Their goal, many of them, is to build these fully autonomous drop in remote workers that you could hire at your company and tell them to go do something, and then come back a month later and it’s done or a week later. Are you saying that this is technologically impossible, or are you just saying that it’s a bad idea, and we should stop these companies from giving their models more autonomy without human intervention?”
Narayanan: “We’re not saying it’s technologically impossible, but we think the timelines are going to be much, much longer than the AI developers are claiming. To be clear, I agree with you, Kevin. You wrote recently in a NYT article that within perhaps a couple of years, AI companies are going to start declaring that they have built AGI. However, we don’t think that what they’re going to choose to call AGI, based on their pronouncements so far, is the kind of AI that will actually be able to replace human workers across the whole spectrum of tasks in a meaningful way.”
“So first of all, our claim is that it’s going to take a long time. It’s going to take a feedback loop of learning from experience in real-world contexts to get to actual drop-in replacements for human workers, if you will. But our second claim is that even if and when that is achieved, for companies to put that out there with no supervision would be a very bad idea. We do think there are market incentives against that. But there also needs to be regulation.”
Hard Fork: One of the things that was so useful about the AI 2027 scenario sketched out by Daniel Kokotajlo and his colleagues at the AI Futures Project “is that it just made it very vivid and visceral for people to try to imagine what the near future could look like, if they’re right,” said the Hard Fork hosts in conclusion. “I’m wondering if you could paint a picture for us of what the world of AI as a normal technology will look like a few years from now.”
Narayanan: “The world in 2027 is still pretty much the world we’re in today. The capabilities will have increased a little bit, and the work hours of people using AI are going to have increased from, I don’t know, three hours per week to five hours per week or something like that. I might be off with the numbers, but I think qualitatively the world is not going to be different.”
“But a decade or two from now, I do think qualitatively the world will be different,” said Narayanan in conclusion. “Before the Industrial Revolution, most jobs were manual, and eventually most manual jobs got automated. In fact, back then, a lot of what we do now wouldn’t even have seemed like work.”
“Work meant physical labor. That was the definition of work. So the definition of work fundamentally changed at one point in time. We do think the definition of work is going to fundamentally change again. … AI systems will be capable of doing or at least mediating a lot of the cognitive work that we do today. And because we think it’s so important that we don’t hand over too much power to these AI systems, and because we think people and companies will recognize that, a lot of what it means to do a job will be supervising those AI systems.”
“It takes a surprising amount of effort, I think, to communicate what we want out of a particular task or a project to, let’s say, a human contractor, and we think that the same thing is going to happen with AI. So a lot of what’s involved in jobs is just specifying the task. And a lot of what is going to be involved is the monitoring of AI and ensuring that it’s not running amok.”