Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

What Causes AI Brain Fry?

In a recent Harvard Business Review (HBR) article, “When Using AI Leads to ‘Brain Fry,’” participants in a research study conducted by the Boston Consulting Group described experiencing “a mental fog with difficulty focusing, slower decision-making, and headaches” due to excessive use or oversight of AI tools beyond their cognitive capacity. The article’s authors — Julie Bedard, Matthew Kropp, Megan Hsu, Olivia T. Karaman, Jason Hawes, and Gabriella Rosen Kellerman — named this condition “AI brain fry.”

Their findings are based on a survey of 1,488 full-time U.S.-based workers (48% male, 51% female) at large companies across a variety of industries, roles, and levels. The survey asked about patterns and intensity of AI use, work experiences, and cognitive and emotional effects.

The authors concluded that AI brain fry arises not from the technology itself, but from the “excessive use or oversight of AI tools beyond one’s cognitive capacity.”

“This problem is becoming more common,” the authors wrote. “As enterprises use more multi-agent systems, employees find themselves toggling among more tools. Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become defining features of working with AI.” Unsurprisingly, workers are running up against the limits of their cognitive capacity. In recent weeks, AI users have taken to social media to describe increased cognitive load, “saturated” attention, and mental fatigue.

The study’s findings brought to mind a recent article, “AI Tools Make Design Skills More Important than Ever,” by Carnegie Mellon University (CMU) professors Mary Shaw, Michael Hilton, and George Fairbanks, where they wrote that “Generative AI (GenAI) and other automated tools are increasingly handling the routine nuts and bolts of creating code. To use them effectively, however, you need to know precisely what you want the tools to generate.” This requires knowledgeable human supervision in order to:

  • specify what you really want (often it is fuzzy),
  • determine whether generated code does what you specified (often it does not),
  • judge the quality of the code (often it is poor),
  • repair AI-generated code if it’s defective (often it is), and
  • decide whether what you specified actually reflects what you intended (often it does not).

Shaw, Hilton, and Fairbanks further explained that “the responsibilities of software developers are rapidly becoming more about designing and less about programming. Reading, understanding, evaluating, and repairing someone else’s code is now more important than writing code from scratch. Judging whether you requested the right thing looms larger when the code is not written by you. Attempting to develop complex software without these skills can put a programmer in the position of the Sorcerer’s Apprentice—able to invoke the technology but lacking the skills to control it.”
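
To make that supervision concrete, here is a small hypothetical sketch in Python (the function, its defect, and the repair are all invented for illustration, not taken from the CMU article). The generated helper looks right and passes a casual test, but fails the second item on the list above: the generated code does not do what was specified.

    # Hypothetical sketch: a plausible-looking AI-generated helper and the
    # human review it needs. Function, defect, and repair are invented here.

    def median(values):
        """AI-generated: return the median of a list of numbers."""
        ordered = sorted(values)
        return ordered[len(ordered) // 2]  # defect: wrong for even-length input

    # Does the generated code do what was specified?
    assert median([1, 3, 5]) == 3          # odd-length case passes; looks fine
    # median([1, 2, 3, 4]) returns 3, not the true median of 2.5

    # The human repair, after reading and judging the generated code:
    def median_repaired(values):
        """Return the median, handling odd- and even-length inputs."""
        ordered = sorted(values)
        n = len(ordered)
        mid = n // 2
        if n % 2 == 1:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    assert median_repaired([1, 2, 3, 4]) == 2.5

The point is not the toy bug but the division of labor: the tool produced the code in seconds, while every step that made it trustworthy required a human who already knew what correct looked like.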

In a related blog post, “How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt,” University of Victoria computer science professor Margaret-Anne Storey begins by defining and contrasting these two concepts.

“The term technical debt is often used to refer to the accumulation of design or implementation choices that later make the software harder and more costly to understand, modify, or extend over time. Technical debt nicely captures that ‘human understanding’ also matters, but the term tends to suggest that the accrued debt is a property of the code, and that effort should be spent removing that debt from the code.”

“Cognitive debt, a term gaining traction recently, instead communicates that the debt accumulated from moving fast resides in the minds of developers and affects their ability to understand, modify, and extend systems. Even if AI agents produce code that is relatively easy to understand, the humans involved may have simply lost the plot — no longer understanding what the program is supposed to do, how their intentions were implemented, or how to change it.”

“Cognitive debt is likely a much bigger threat than technical debt as AI and agents are adopted,” Storey added, reminding us that a program is more than its source code. “Rather, a program is a theory that lives in the minds of the developer(s), capturing what the program does, how developer intentions are implemented, and how the program can be changed over time. Usually, this theory is not just in the mind of one developer but is distributed across the minds of many, if not thousands, of others.”
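
A tiny hypothetical example (invented here, not from Storey’s post) shows why cognitive debt can hide in perfectly clean code. There is nothing to refactor, yet the theory the code depends on may already be gone:

    # Hypothetical sketch: clean, typed, documented code that still carries
    # cognitive debt. All names and values are invented for illustration.

    RETRY_LIMIT = 7  # why 7? That reasoning lives only in someone's head.

    def should_retry(attempt: int, status: int) -> bool:
        """Retry transient failures, up to RETRY_LIMIT attempts."""
        return status in (429, 503) and attempt < RETRY_LIMIT

    # There is little technical debt here: the code is short and readable.
    # The cognitive debt is everything the code does not say: why 7 retries
    # and not 3, why status 500 is excluded, and which downstream service
    # these limits were tuned to protect.

In Storey’s terms, the “theory” of this snippet is precisely the part that never made it into the file.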

Storey illustrates what she means by losing the plot of a program with a concrete example from an entrepreneurship course she recently taught. The course was organized into student teams, each tasked with building a software product over the semester. But eight weeks in, one team hit a wall: “They could no longer make even simple changes without breaking something unexpected.”

The team initially blamed technical debt: messy code, poor architecture, hurried implementations. But as they dug deeper, the real problem emerged: “No one on the team could explain why certain design decisions had been made or how different parts of the system were supposed to work together.” While the code may have been messy, the bigger issue was that the team had lost the plot — their shared understanding of how the system was supposed to work had fragmented or disappeared entirely. “They had accumulated cognitive debt faster than technical debt, and it paralyzed them.”

“This dynamic echoes a classic lesson from Fred Brooks’ Mythical Man-Month,” noted Storey. “Adding more agents to a project may increase coordination overhead, invisible decisions, and cognitive load. Of course, agents can also be used to manage cognitive load by summarizing what changes have been made and how, but the core constraints of human memory and working capacity will be stretched by the push for speed at all costs.”

“There’s a huge range in how workers use AI today,” noted the Brain Fry HBR article. “There’s variation in the number of tools used at once, the degree to which AI replaces work versus augments it, the level of oversight required, and whether AI has increased or decreased overall workload. Workers may use search agents, research agents, data analysis tools, image generation or design tools, or coding agents.”

To better understand the mental impact of different uses of AI, the authors asked participants which forms of engagement they found draining and exhausting rather than exciting and energizing, and came up with three key insights:

  • Degree of AI oversight. The most mentally taxing form of AI engagement was the extent to which tools required continuous monitoring.
  • Increased workload. A second predictor of cognitive load and mental fatigue was the degree to which AI tools increased overall workload.
  • Number of AI tools used simultaneously. Productivity initially increased as employees used multiple AI tools, but after three tools, productivity declined. “Multitasking is notoriously unproductive, and yet we fall for its allure time and again.”

“AI can help employees work faster, think bigger, and innovate more,” the authors concluded. “At the same time, it can produce cognitive overload, with significant personal and business consequences. Our findings suggest that the difference lies not in how much AI is used, but in how workers, teams, leaders, and organizations shape its use.”

“AI brain fry reveals just how quickly and powerfully these new tools can affect our cognitive capacities. The challenge now is to harness that same power to achieve positive human and business outcomes.”

To help do so, their article concludes with five lessons for leaders:

Redesign jobs, work, and tools holistically for human–AI responsibility. “AI oversight cannot simply be layered on top of human oversight, nor can AI agents be stacked indefinitely on a single user. Just as we have norms for spans of control in managing people, limits must be defined for human + agent oversight and for agents alone.”

Set explicit expectations about AI and workload. “When organizations celebrate ‘productivity gains’ without clarifying workload implications, employees interpret this as work intensification. That ambiguity alone can increase stress. Leaders reduce strain when they clearly define AI’s purpose in the organization.”

Shift metrics from activity — and intensity — to impact. “Start from clear strategic objectives with measurable outcomes. Exercise caution in responding to efficiency gains. Don’t rush to backfill work recently automated by an ingenious worker; doing so immediately will feel punitive and disincentivize further innovation.”

Develop worker skills for managing AI workloads. Some individuals are “working harder to manage the tools than to actually solve the problem.” “In our work with software developers,” the authors added, “we’ve found that those most advanced in using AI can feel blocked unless they develop critical new skills such as problem framing, analysis planning, and strategic prioritization.”

Treat human attention as a scarce and valuable resource. “Some of the most valuable human skills today, including discernment, decision making, and strategic thinking, require sustained attention. While burnout is widely recognized, mental fatigue is more likely to go undetected. Organizations should evolve their people analytics to monitor cognitive load and treat mental fatigue from AI use as a new kind of workplace risk.”

One response to “What Causes AI Brain Fry?”

  1. Suryakant Shrirang Bansude

    This is a sharp observation — and I think it points to something deeper.

    What’s being described as “AI brain fry” isn’t just tool overload.

    It’s what happens when:
    👉 we increase *decision volume* without improving *decision structure*

    AI doesn’t remove thinking — it shifts the burden to:
    – supervising outputs
    – evaluating alternatives
    – deciding what to trust

    But those decisions are rarely made explicit.

    So people end up holding:
    more options, more scenarios, more ambiguity — all in their head.

    That’s where the fatigue comes from.

    In that sense, this isn’t just a human limitation problem.

    It’s a design gap:
    👉 we’ve scaled intelligence, but not the way decisions are framed, tested, and traced.

    Curious how others are seeing this — especially in planning / operations contexts.
