Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.


  • The January 31, 2026 issue of The Economist included a special focus on the mounting anxiety about the social consequences of AI, with four articles devoted to the subject. “Solving fiendish maths problems, making complex medical diagnoses, conjuring up new software in moments: the feats of generative AI get more impressive by the day,” noted the issue’s lead article, urging readers to “Stop panicking about AI. Start preparing.”

    While the future course of AI is obviously uncertain, there are good reasons to believe that society has time to prepare and adapt. “It takes time for a new technology to diffuse from the cutting edge to the office cubicle,” the article noted. Firms and governments should use this breathing space to help those most at risk of displacement.

    “So far labour markets seem unruffled. Service jobs are most exposed to generative AI, yet in America the number of white-collar jobs has gone up by 3m since ChatGPT was launched, while blue-collar jobs have stayed flat. Employment has risen even in areas that have been keen adopters, such as coding.” One reason for the slow economic impact is that while AI excels at some tasks, it also “confidently spouts nonsense, or struggles to count the number of ‘r’s in ‘strawberry.’” This unpredictability means that companies and workers need time to figure out where, and how, to apply AI effectively.

    “Moreover, business processes don’t change overnight. Electricity was first harnessed commercially in the 1880s, but it took 40–50 years to generate productivity gains on factory floors. Plants had to be redesigned and workflows rethought.” This time, too, companies must think carefully about how to encourage workers to use AI, how to mitigate its shortcomings, and how to deploy it successfully.

    Realizing the potential of a general-purpose technology (GPT) — like steam power, digital computers, and now AI — requires large investments and a fundamental rethinking of how production is organized. It takes considerable time for these technologies and new business models to be widely deployed across economies and for their full benefits to be realized.

  • “Chief executives of some of the world’s largest companies are all-in on artificial intelligence, though many haven’t yet seen meaningful returns on their investments,” said the WSJ in a recent article, “CEOs to Keep Spending on AI, Despite Spotty Returns.” The article is based on the 2026 annual survey of Teneo, a global CEO consulting and advisory firm. “After a year in which trillions of dollars worth of AI investments buoyed global markets and the economy, 68% of CEOs plan to spend even more on AI in 2026,” according to the Teneo survey of more than 350 public-company CEOs.

    “Teneo also surveyed about 400 institutional investors, of which 53% expect that AI initiatives would begin to deliver returns on investments within six months,” the WSJ article added. “That compares to the 84% of CEOs of large companies — those with revenue of $10 billion or more — who believe it will take more than six months.”

    Teneo’s Vision 2026 survey was conducted from mid-October to mid-November 2025. Let me summarize its key findings.


  • “Transformative AI will generate a genius supply shock: abundant, cheap, and fast agents that can outperform human beings across many domains. But society is likely to adapt too slowly to this remarkable but unfamiliar new capability,” wrote University of Toronto professors Ajay Agrawal and Joshua S. Gans in the introduction to their essay, “Transformative AI and the Increase in Returns to Experimentation: Policy Implications.” Their essay, published in Volume 2 of The Digitalist Papers, explores policies “that can help companies, regulators, and individuals learn how to use these powerful new tools and put them to effective use.”

    What Is the Genius Supply Shock?

    The authors cite a few concrete examples of what they mean by a genius supply shock. “Humanity’s Last Exam” is a test developed over the past couple of years by a team of researchers who claim it’s the hardest test ever administered to an AI system. The exam includes roughly 3,000 multiple-choice and short-answer questions designed to test AI systems’ abilities in areas ranging from analytic philosophy to rocket engineering.

    “Unlike traditional AI evaluations, which test for narrow capabilities in isolated tasks, this benchmark simulates the challenge of a PhD qualifying exam merged with a generalist’s oral defense,” wrote Agrawal and Gans. “Questions are long-form and open-ended. To score well, an AI must not only know, but understand. Success requires what we typically associate with our highest-functioning minds: flexible reasoning, conceptual abstraction, and the capacity to transfer knowledge across domains. Until recently, no machine had come close to passing.”

    In 2024, researchers administered Humanity’s Last Exam to six leading AI models. “All of them failed miserably,” said a NY Times article. The AI model with the highest score got only 8.3 percent of the answers correct. The percentage of correct answers improved significantly over the next two years, with a few models achieving accuracies above 40 percent, putting them within striking distance of high-performing postgraduate scientists.

    In their essay, Agrawal and Gans noted that the advances realized in 2025 “were anticipated by, among other experts, Dario Amodei, cofounder and CEO of AI foundation model company Anthropic, who described forthcoming systems as providing ‘a country of geniuses in a datacenter.’” Each such genius, Amodei added, would be “smarter than a Nobel Prize winner across most relevant fields — biology, programming, math, engineering, writing, etc.” These systems do not simply automate routine tasks, he speculated; they synthesize knowledge across domains, propose and critique solutions, and do so at digital scale. Unlike human experts, they are cheap, abundant, and tireless.

  • The Digitalist Papers is a series of essays that aim to offer insights on the “possible futures that the AI revolution might produce.” The first volume, released in September 2024, included twelve essays that explored the intersection of AI and democracy in America. Volume 2, released in December 2025, includes 21 essays that examine the economics of transformative AI.

    Volume 2 was edited by Daniel Susskind, Erik Brynjolfsson, Anton Korinek, Alex Pentland, and Ajay Agrawal. “AI capabilities have been relentlessly improving across a variety of benchmarks. And in November 2022, a new chapter in this story began,” they wrote in the Volume 2 Introduction. While Volume 1 explored the implications of AI for democracy, “this second volume turns to its economic dimensions, focusing on how transformative AI might alter production, work, and prosperity itself. It brings together leading economists, technologists, philosophers, and others to explore the key economic challenges and opportunities of ‘transformative AI,’ or TAI. Most significantly, though, it is also focused on how we might respond to whatever lies ahead.”

    “In what follows, we define ‘TAI’ as advanced AI systems that will usher in an economic transformation comparable to the Industrial Revolution, but on a much shorter timescale,” they wrote. “They exhibit intelligence surpassing the most capable humans across multiple fields, capable of performing not only routine cognitive work, but also complex tasks like solving mathematical theorems, creating novel works, or directing experiments autonomously.”

    AI’s technological progress is often described using terms like artificial general intelligence (AGI) and artificial superintelligence (ASI) — that is, AI systems that will eventually match or surpass human capabilities across virtually all cognitive tasks. Transformative AI (TAI), by contrast, reflects a growing consensus in policy circles that even if AI does not fully reach human-level cognition, it could still have an impact on society comparable to the agricultural or industrial revolutions.

  • A few weeks ago, New York Times columnist David Leonhardt hosted an online conversation with three prominent economists: MIT’s David Autor, the University of Virginia’s Anton Korinek, and Yale University’s Natasha Sarin. Their topic was the impact that AI is already having on employment, and how large a transition society may be facing. I found their discussion quite interesting at multiple levels. Let me summarize some of their key points.

    The Near-Term Impact of AI on Jobs

    “Before we look toward the future, let’s talk about the present,” Leonhardt began. “There is debate among economists about whether A.I. has already led to a meaningful amount of job loss. What do you each think?”

    “The evidence is inconclusive,” said Professor Autor. Some widely discussed findings suggest that entry-level employment for young workers has declined in AI-exposed occupations such as software development and customer service. However, other recent business-cycle factors — such as tariffs and interest rates — may also be influencing hiring trends. “That said,” he added, “there’s every reason to believe that advancing A.I. will fundamentally change hiring and skill requirements across much of the economy. In many cases, I think we’ll see fewer people doing this work, and those who do it will be more expert, solving the thorny problems that A.I. currently cannot solve on its own.”

    Professor Sarin also noted that the evidence is inconclusive: “despite all the vibes and anecdotes you hear about A.I. labor market displacement, there just isn’t evidence in the data that this has happened in a meaningful way so far. … We don’t find differences in employment in the last few years between the occupations most exposed to A.I. and those least exposed.” That, she noted, should not be surprising. “It’s been only three years since the mass introduction of this technology, and it takes firms — and all of us — time to understand how to deploy it in ways that are truly transformative.”

    Professor Korinek offered a different lens. While employment data may be ambiguous, he argued, investment data is not. “The leading A.I. labs aren’t making hundred-billion-dollar bets because they expect A.I. to have minor effects on the labor market. They are betting on achieving artificial general intelligence (A.G.I.), which could substitute for human labor across much of the economy.” He also mentioned that few people work at these labs relative to the scale of investment. “The employment effects we are looking for may simply be lagging indicators of a transformation that’s already locked in by the capital being deployed.”

  • “The COVID-19 pandemic forced organizations to reimagine work in ways few had previously considered,” said the Introduction of “Remote-First Organizations: Practices that Drive Talent, Trust, and Performance,” a recently published report by the Institute for Corporate Productivity (i4cp). “What began as an emergency response has since become a long-term operating model for many. Remote and flexible work models are no longer fringe benefits or stopgap solutions — they are strategic choices shaping the future of work.”

    Work from home (WFH) has been around for decades. The share of people working from home three or more days per week was under 1% in 1980, growing modestly with the rise of the internet to around 4% in 2018. Then came Covid-19, forcing tens of millions around the world to work from home and triggering a mass workplace experiment that broke through the technological and cultural barriers that had prevented WFH adoption in the past.

    Since May of 2020, economists Jose Maria Barrero (Instituto Tecnológico Autónomo de México), Nicholas Bloom (Stanford University), and Stephen J. Davis (University of Chicago) have been conducting monthly surveys to track the evolution of WFH, and I’ve been keeping up with their findings ever since. One of their first surveys found that the percentage of paid full days worked from home once COVID hit in April of 2020 was 61.4%, a huge increase from their 4.8% WFH estimate just before COVID.

    The percentage started to decline in subsequent months. One year later WFH was around 45%. About two years later, the June 2023 survey found that the percentage of paid full days worked from home was around 28%. For the past three years, the share of paid WFH days for all workers has fluctuated around 25%. And, according to their latest survey, around 61% of all full-time employees were full-time onsite, 13% were fully remote, and 26% were in a hybrid arrangement.

  • “There is significant interest in the development and application of foundation models for scientific discovery,” said “Foundation Models for Scientific Discovery and Innovation,” a recent report from the National Academies. “Foundation models possess the capacity to generate outputs or findings and discern patterns within extensive data sets with data volumes that are considered overwhelming for classical modes of inquiry. Efforts are under way to use these models to accelerate various aspects of scientific work flows (including streamlining literature reviews, planning experiments, data analysis, and code development) and generating novel findings and hypotheses that can then spur further research directions. However, significant challenges remain in the effective use of these models in scientific applications, including issues with flawed or limited training data and limited verification, validation, and uncertainty quantification capabilities.”

    High performance computing has been a major part of my education and subsequent career. In the late 1960s I was doing atomic and molecular calculations as a PhD physics student at the University of Chicago. Then in the early 1990s, I was the general manager of IBM’s new Scalable POWERparallel (SP) family of parallel supercomputers.

    The advances of supercomputers over the past several decades have been remarkable. The machines I used as a graduate student in the 1960s probably had a peak performance of a few million floating-point calculations per second (megaflops). Twice a year since 1993, the TOP500 project has published a list of the 500 most powerful supercomputers in the world. In the latest such list, the fastest supercomputer surpassed 1.8 billion billion floating-point calculations per second (1.8 exaflops).
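    The scale of that leap is easy to sanity-check with a few lines of arithmetic (a quick sketch; the 1960s figure is an order-of-magnitude assumption based on the rough estimate above):

```python
# Rough speedup from a 1960s machine to today's fastest supercomputer.
# Both figures are order-of-magnitude assumptions taken from the text.
flops_1960s = 3e6     # a few megaflops (assumed)
flops_today = 1.8e18  # 1.8 exaflops (latest TOP500 leader)

speedup = flops_today / flops_1960s
print(f"{speedup:.0e}")  # prints "6e+11" -- nearly a trillion-fold
```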

    AI is now taking high performance computing to a whole new level of capabilities. A September 2023 issue of The Economist, “How AI Can Revolutionize Science,” included a number of articles on the impact of AI on scientific discovery. “Debate about artificial intelligence (AI) tends to focus on its potential dangers: algorithmic bias and discrimination, the mass destruction of jobs and even, some say, the extinction of humanity,” noted the issue’s lead article. “As some observers fret about these dystopian scenarios, however, others are focusing on the potential rewards. AI could, they claim, help humanity solve some of its biggest and thorniest problems. And, they say, AI will do this in a very specific way: by radically accelerating the pace of scientific discovery, especially in areas such as medicine, climate science and green technology.”

  • “Even as some instructors remain fervently opposed to chatbots, other writing and English professors are trying to improve them,” observed a recent New York Times article, “AI Is Coming to Class.” At the heart of the article is a debate now unfolding across higher education: whether — and how — university students should be taught to properly use generative AI.

    The article illustrates this debate through the first-year writing program at Barnard College, which generally bans generative AI tools such as ChatGPT, Claude, and Gemini — systems that can readily draft paragraphs, conduct research, and compose essays. The program’s policy warns students that AI tools are “often factually wrong” and “deeply problematic,” perpetuating misogyny as well as racial and cultural biases.

    Yet the program has made an exception for Benjamin Breyer, a senior lecturer in Barnard’s English Department, who is determined to see whether AI can supplement, rather than short-circuit, students’ efforts to learn academic writing. In doing so, Breyer represents a growing group of faculty who are experimenting with how AI might be used constructively — even as many of their colleagues remain firmly opposed.

    I am particularly interested in this debate because it strongly echoes my own early experiences with computers as a physics student in the 1960s — a time when the legitimacy of using machines as intellectual tools was also very much in question. Let me explain.

  • “Americans have grown sour on one of the longtime key ingredients of the American dream,” said a recent NBC News article. “Almost two-thirds of registered voters say that a four-year college degree isn’t worth the cost, according to a new NBC News poll, a dramatic decline over the last decade. Just 33% agree a four-year college degree is ‘worth the cost because people have a better chance to get a good job and earn more money over their lifetime,’ while 63% agree more with the concept that it’s ‘not worth the cost because people often graduate without specific job skills and with a large amount of debt to pay off.’”

    The poll is based on interviews with roughly 1,000 registered voters, 70% via cell phones and 30% via text. Of those interviewed, 48% were male and 52% female. 25% were high school graduates, 34% had some college or vocational training beyond high school, 21% had college degrees, and 17% had graduate or professional degrees. About 30% were between 18 and 39 years old, 35% between 40 and 59, 25% between 60 and 74, and 10% over 75 years old.

    In 2013, 53% of adults surveyed said that a college degree was worth the cost, compared with the 33% who said so in 2025. “The eye-popping shift over the last 12 years comes against the backdrop of several major trends shaping the job market and the education world, from exploding college tuition prices to rapid changes in the modern economy — which seems once again poised for radical transformation alongside advances in AI.”

    As has been true for years, data from the US Bureau of Labor Statistics (BLS) has consistently shown that those with advanced degrees earn more and have lower unemployment rates than those with lower levels of education.

  • “The proliferation of generative artificial intelligence (AI) has sparked a global debate about its potential impact on the labor market,” said the Introduction of a recent article, “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” by Stanford Digital Economy Lab (SDEL) Director Erik Brynjolfsson, Postdoctoral Fellow Bharat Chandar, and Research Scientist Ruyu Chen. “This discourse, across academia, public policy, business, and popular media, spans utopian predictions of enhanced productivity, dystopian fears of widespread job displacement, and skeptical views that AI will have minimal effects on employment or productivity.”

    Over the past two centuries, there’ve been periodic fears about the impact of technology-based automation on jobs. In the 1810s, for example, the so-called Luddites smashed the new machines that were threatening their textile jobs. But each time those fears arose in the past, technology advances ended up creating more jobs over the ensuing decades than they destroyed.

    Automation anxieties have understandably accelerated in recent years, as AI-based innovations are now being applied to activities requiring cognitive capabilities that not long ago were viewed as the exclusive domain of humans. The concerns surrounding AI’s long-term impact on jobs may well be in a class by themselves.

    There’s a broad consensus that AI will have a major impact on jobs and the very nature of work, but it’s much less clear what that impact will be. Will AI play out like past technology innovations: highly disruptive in the near term, but ultimately leading to the creation of new jobs, whole new industries, and a rising standard of living? Or will this time be different, as AI-based innovations end up replacing a large portion of the workforce, leading to mass unemployment, economic dislocations, and social unrest?
