The Digitalist Papers is a series of essays that aims to offer insights on the “possible futures that the AI revolution might produce.” The first volume, released in September 2024, included twelve essays exploring the intersection of AI and democracy in America. Volume 2, released in December 2025, includes twenty-one essays examining the economics of transformative AI.
Volume 2 was edited by Daniel Susskind, Erik Brynjolfsson, Anton Korinek, Alex Pentland, and Ajay Agrawal. “AI capabilities have been relentlessly improving across a variety of benchmarks. And in November 2022, a new chapter in this story began,” they wrote in the Volume 2 Introduction. While Volume 1 explored the implications of AI for democracy, “this second volume turns to its economic dimensions, focusing on how transformative AI might alter production, work, and prosperity itself. It brings together leading economists, technologists, philosophers, and others to explore the key economic challenges and opportunities of ‘transformative AI,’ or TAI. Most significantly, though, it is also focused on how we might respond to whatever lies ahead.”
“In what follows, we define ‘TAI’ as advanced AI systems that will usher in an economic transformation comparable to the Industrial Revolution, but on a much shorter timescale,” they wrote. “They exhibit intelligence surpassing the most capable humans across multiple fields, capable of performing not only routine cognitive work, but also complex tasks like solving mathematical theorems, creating novel works, or directing experiments autonomously.”
AI’s technological progress is often described using terms like artificial general intelligence (AGI) and artificial superintelligence (ASI) — that is, AI systems that will eventually match or surpass human capabilities across virtually all cognitive tasks. Transformative AI (TAI), by contrast, reflects a growing consensus in policy circles that even if AI does not fully reach human-level cognition, it could still have an impact on society comparable to the agricultural or industrial revolutions.
“A race is underway not only to build ever more powerful AI systems, but also to design reliable benchmarks with which to monitor that progress,” said the Volume 2 Introduction. Where is this progress taking us? “Throughout 2025, the leaders of the largest AI companies set out their predictions for what lies ahead. All described a similarly remarkable trajectory for AI — though each framed it somewhat differently:”
- “We are now confident that we know how to build AGI as we have traditionally understood it,” said Sam Altman, co-founder of OpenAI.
- “A system that’s capable of exhibiting all the cognitive capabilities humans have is probably three to five years away,” predicted Demis Hassabis, co-founder of DeepMind.
- “Powerful AI is two to three years away,” remarked Dario Amodei, co-founder of Anthropic.
Driving this remarkable progress — and underwriting these confident claims — are extraordinary levels of investment. As the Volume 2 editors observed, “there are very few individual technical challenges — if any — at which so much financial resource has been directed.” To put these investments in perspective, they cited two specific examples:
- The Apollo Project to put a man on the moon in the 1960s, which ran for more than a decade, cost around $25 billion in 1973 dollars — about $180 billion today — roughly comparable to a single year of U.S. private investment in AI.
- The Manhattan Project to build the atomic bomb in the 1940s cost around $2 billion in 1945 dollars — about $11.6 billion today — barely a tenth of what U.S. companies now invest in AI annually.
“And yet, despite all this — the remarkable achievements in AI to date, the striking predictions of what lies ahead, and the enormous financial investment fueling this activity — comparatively little attention is devoted to understanding the economic consequences of what is being built,” the editors continued. “What’s more, when debates about these possible futures do take place, they tend to unfold within communities of like-minded people, separated by organizational silos and disciplinary boundaries.”
In “The San Francisco Consensus,” one of the Volume 2 essays, former Google Chairman and CEO Eric Schmidt described one such like-minded community.
“Those following current debates in Silicon Valley would be forgiven for thinking that we in the AI community don’t know what we’re talking about,” he wrote. “AI experts are divided on a host of issues. Perhaps most famously, diverging assessments of the existential risks posed by AI have separated ‘doomers’ and ‘accelerationists.’ But leading thinkers also disagree on the relative merits of open and closed models, the benefits of regulation, and the national security implications for deterrence, to highlight just a few unresolved questions.”
“Yet beneath the apparent discord lies a deeper consensus around a number of key ideas,” Schmidt added. “Most of those leading the development of AI agree on at least three central premises.”
- First, they believe in the power of so-called scaling laws — the idea that ever-larger models can continue to drive rapid progress in AI.
- Second, they think the timeline for this revolution is much shorter than previously expected, with many now seeing superintelligence emerging within two to five years.
- And third, they’re betting that transformative AI will bring unprecedented benefits to humanity.
“This belief is expressed in hockey-stick graphs promising exponential rates of scientific advancement, financial returns, and ultimately human progress. I call this set of overlapping views the San Francisco Consensus.”
What’s the likely impact of transformative AI, and how should we respond?
“TAI would be deeply disruptive to how we live and work together in society,” wrote the Volume 2 editors in their Introduction. “Its effects are likely to unfold in a variety of different ways, for good and for bad. It might, for instance, dramatically boost productivity and scientific progress, but at the same time disrupt labor markets and radically change the distribution of wealth and power.”
“With that scale of disruption in mind, we asked all contributors not only to better understand the consequences of TAI, but also to reflect on how we ought to respond. What changes are needed to our institutions, norms, or policy frameworks to ensure that all of us can flourish in a world with TAI? That practical dimension is essential — and a central focus of this volume.”
