“Transformative AI will generate a genius supply shock: abundant, cheap, and fast agents that can outperform human beings across many domains. But society is likely to adapt too slowly to this remarkable but unfamiliar new capability,” wrote University of Toronto professors Ajay Agrawal and Joshua S. Gans in the introduction to their essay, “Transformative AI and the Increase in Returns to Experimentation: Policy Implications.” Their essay, published in Volume 2 of The Digitalist Papers, explores policies “that can help companies, regulators, and individuals learn how to use these powerful new tools and put them to effective use.”
What Is the Genius Supply Shock?
The authors cite a few concrete examples of what they mean by a genius supply shock. “Humanity’s Last Exam” is a test developed over the past couple of years by a team of researchers who claim it’s the hardest test ever administered to an AI system. The exam includes roughly 3,000 multiple-choice and short-answer questions designed to test AI systems’ abilities in areas ranging from analytic philosophy to rocket engineering.
“Unlike traditional AI evaluations, which test for narrow capabilities in isolated tasks, this benchmark simulates the challenge of a PhD qualifying exam merged with a generalist’s oral defense,” wrote Agrawal and Gans. “Questions are long-form and open-ended. To score well, an AI must not only know, but understand. Success requires what we typically associate with our highest-functioning minds: flexible reasoning, conceptual abstraction, and the capacity to transfer knowledge across domains. Until recently, no machine had come close to passing.”
In 2024, researchers administered Humanity’s Last Exam to six leading AI models. “All of them failed miserably,” said a New York Times article. The highest-scoring model answered only 8.3 percent of the questions correctly. The percentage of correct answers improved significantly over the next two years, with a few models achieving accuracies above 40 percent — putting them within striking distance of high-performing postgraduate scientists.
In their essay, Agrawal and Gans noted that the advances realized in 2025 “were anticipated by, among other experts, Dario Amodei, cofounder and CEO of AI foundation model company Anthropic, who described forthcoming systems as providing a country of geniuses in a datacenter.” Each such genius, Amodei added, would be “smarter than a Nobel Prize winner across most relevant fields — biology, programming, math, engineering, writing, etc.” These systems do not simply automate routine tasks, he speculated; they synthesize knowledge across domains, propose and critique solutions, and do so at digital scale. Unlike human experts, they are cheap, abundant, and tireless.
What’s the Likely Economic Impact? Demand Adjusts Slowly
The central economic question is not whether AI will become extraordinarily capable, but whether institutions can absorb that capability quickly enough.
“The genius supply shock does not automatically translate into economic impact,” the authors wrote. “In the short run, organizations face fixed contracts, workflows, capital stocks, and, crucially, validation pipelines. When low-cost geniuses arrive before the capacity to test their proposals, much of the new problem-solving capacity sits idle.”
Firms must build complementary organizational capital — data infrastructure, evaluation frameworks, liability regimes — before they can deploy genius-level agents at scale. Importantly, they need to figure out precisely what these geniuses can do for them. In the past, such capabilities were scarce; now their application becomes heavily imagination-limited — that is, constrained not by intelligence, but by organizational creativity and institutional readiness.
The time lag between major technological advances and their broader economic impact was explained by economists Erik Brynjolfsson, Daniel Rock, and Chad Syverson in their 2018 article, “The Productivity J-Curve.” Major transformative technologies — like the steam engine, electricity, semiconductors, and the internet — were the defining technologies of their respective eras. But, as history shows, there has generally been a significant time lag between the initial marketplace introduction of a transformative technology and its broader impact on industries, economies, and societies. While these technologies have enormous potential from the outset, realizing that potential requires major complementary investments, including business process redesign, innovative new products and business models, workforce reskilling, and a fundamental rethinking of the nature of production.
Taking advantage of genius-level AI will similarly require considerable experimentation and the development of concrete business use cases. “Solutions proposed by an AI may be persuasive on paper but fail in production because of data quality, governance, or human factors,” wrote Agrawal and Gans. “Organizations will need to run experiments — pilots, audits, and A/B tests — and they will need mechanisms to share lessons about what works and what does not.”
What Happens When Genius Becomes Abundant?
The arrival of abundant, near-zero-marginal-cost, high-level cognition will transform labor markets and the means of production. In the short run, the availability of genius-level AI may overwhelm demand. If firms have not reorganized tasks or invested in complementary infrastructure, the number of problem-solving activities that genius AIs can perform will be limited to a relatively small subset of leading-edge frontier problems. Consequently, genius-level AI intelligence may largely substitute for human geniuses while leaving routine knowledge workers temporarily less affected.
In the long run, if AI agents possess an absolute advantage across much of knowledge work, many tasks previously classified as “routine” may be reclassified as “frontier” or “genius” tasks. “The stock of frontier problems expands as the cost of solving them falls; tasks that were once considered too complex or unprofitable become tractable,” the authors wrote. “The division of labor shifts toward more oversight, supervision, and creative exploration.”
Two principal mechanisms drive these predictions:
Task substitution. Genius AI agents can perform many steps in cognitive workflows more accurately and quickly than humans. In the short run, this substitution will be concentrated in problem-solving tasks where codified data exist and institutional constraints allow machine outputs to be acted upon.
Task reclassification. As the marginal cost of solving complex problems falls, firms will find it profitable to apply genius AI to domains previously left unaddressed because they were too difficult or too small in scale.
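The task-reclassification mechanism can be made concrete with a toy numerical sketch (my illustration, not from the essay): each task has a hypothetical payoff, and a task is worth solving only when its payoff exceeds the marginal cost of solving it. As the cost of genius-level cognition falls, the set of tractable tasks expands.

```python
# Toy illustration of task reclassification: hypothetical payoffs, not real data.
# A task is "tractable" when its payoff exceeds the marginal cost of solving it.
task_values = [5, 20, 80, 300, 1200]  # assumed payoff per task, in arbitrary units

def tractable(values, cost_per_task):
    """Return the tasks worth solving at a given marginal cost of cognition."""
    return [v for v in values if v > cost_per_task]

# As the cost per task falls, more of the task stock becomes profitable to address.
for cost in (1000, 100, 10):
    solved = tractable(task_values, cost)
    print(f"cost={cost:>4}: {len(solved)} of {len(task_values)} tasks tractable")
```

Lowering the cost from 1,000 to 10 units moves most of the task stock across the profitability threshold, which is the sense in which formerly “too difficult or too small” domains become frontier work.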
Potential Socially Inefficient Outcomes
In their final section, the authors outline three potential reasons why an abundance of genius AI might lead to socially inefficient outcomes:
Slow diffusion of information across the economy. Information may diffuse more slowly than is socially desirable if firms treat insights about which tasks should be reclassified as genius-level tasks as proprietary.
Restrictions on experimentation. As AI systems grow more capable, concerns about safety, misinformation, labor displacement, and alignment may intensify, potentially slowing experimentation.
Institutional inertia and administrative bottlenecks. In highly regulated sectors such as health care or finance, administrative backlogs can reduce effective demand for genius-level cognition while risk-averse administrators may default to delaying or denying applications.
The speed at which society resolves this mismatch between cognitive supply and institutional demand will shape both productivity growth and labor market disruption.
“Transformative AI promises a once-in-a-generation supply shock in cognitive capability,” wrote the essay’s authors in conclusion. “Genius-level AI will create value only when organizations have the capacity and incentives to test and adopt its outputs. Without intervention, private actors will underinvest in experimentation, and regulators will learn too slowly.”
“By embracing experimentation and incorporating feedback into evolving rules, policymakers can shorten the lag between the arrival of genius-level AI and its economic impact. The goal is not to pick winners or give the AI sector a free pass, but to create a regulatory environment in which experimentation is rewarded, learning is shared, and the productivity benefits of abundant cognition accrue to society as quickly as possible without excessively compromising safety.”
