Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

Executives have long relied on simple categories to frame how technology fits into organizations: tools automate tasks, people make decisions, and strategy determines how the two work together. But the rise of agentic AI is beginning to blur these familiar distinctions. As noted in The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI, a November 2025 report by the MIT Sloan Management Review in collaboration with the Boston Consulting Group, a new class of systems is emerging that does not fit neatly into traditional management frameworks. The study, conducted by Sam Ransbotham, David Kiron, Shervin Khodabandeh, Sesh Iyer, and Amartya Das, explores how organizations are beginning to grapple with this new reality.

“That framing is no longer sufficient,” the authors wrote. “A new class of systems — agentic AI — complicates these boundaries. These systems can plan, act, and learn on their own. They are not just tools to be operated or assistants waiting for instructions. Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go.” Notably, 76% of respondents to their global executive survey said they view agentic AI as more like a coworker than a tool.

Agentic AI introduces a fundamentally new category: systems that behave partly like software tools and partly like human coworkers. Traditional management logic assumes that technology either substitutes or complements labor, automates or augments tasks, functions as labor or capital, or behaves as a tool or a worker — but not all at once. Organizations now face an unprecedented challenge: managing a single system that requires both human resource approaches and asset management techniques.

This dual nature creates a set of tensions that traditional management frameworks were never designed to address. “Technology executives focus on technical issues, making pilot, vendor, or infrastructure decisions. IT leaders want predictable, scalable systems with clear technical specifications. Business executives focus on markets, competition, and people. CFOs need investment models with measurable returns and depreciation schedules. HR executives require performance management frameworks and supervision protocols. Business leaders demand both efficiency and adaptability from the same system.”

Addressing these competing demands requires understanding four operational tensions that expose the inadequacy of traditional management approaches and then redesigning fundamental processes — work design, governance, workforce planning, learning, and investment — to work with, rather than against, agentic AI’s inherent duality. Let me summarize a few of the report’s recommendations on how companies should deal with agentic AI’s dual nature.

Strategic Tensions When Adopting Agentic AI

Successfully navigating this challenge requires that leaders manage four distinct tensions:

  • The Flexibility Tension: Tools scale predictably; workers adapt dynamically.
  • The Investment Tension: When, and how, should organizations invest in agentic systems?
  • The Control Tension: How do you supervise something designed to work autonomously?
  • The Scope Tension: When, and by how much, should organizations change processes?

The Flexibility Tension: Scalability Versus Adaptability. “Human workers are maximally flexible. They can switch tasks, learn new skills, and adapt to unexpected situations with minimal retraining. Tools are much less flexible. Machinery and infrastructure excel at specific purposes and scale predictably, but they can struggle to adapt to change. Agentic AI sits in between: more adaptable than tools but (currently) less flexible than workers.”

How should organizations design processes around systems with intermediate flexibility? Organizations that optimize solely for AI efficiency risk missing out on AI’s human-like ability to adapt to system failures or unexpected market shifts. Those that strike the right balance can achieve both AI-driven efficiency and human-level adaptability.

The Investment Tension: Experience Versus Expediency. “Traditional tools require large upfront costs but deliver predictable returns through established depreciation schedules. Human workers are an ongoing variable expense, but their value appreciates with experience and training. Agentic AI defies both models, requiring substantial initial development costs as well as ongoing variable costs, such as training models on new data.”

Should investments in agentic AI be viewed more like investments in tools, in workers, or in both? Organizations that apply traditional tool investment frameworks to agentic AI risk underinvesting in continuous learning and adaptation. Those that adopt hybrid investment models may create compounding returns as their systems learn, adapt, and generate new capabilities across multiple business contexts.

The Control Tension: Supervision Versus Autonomy. “Tools are fully owned and controlled, behaving predictably once deployed. Workers must be managed through contracts, incentives, and oversight because humans have autonomy and may pursue divergent goals. Agentic AI requires supervision and management like a worker does because its outputs can be unpredictable, even though organizations own it like a tool.”

How can organizations design processes to effectively supervise an agent that also works autonomously? Organizations that fail to develop appropriate supervision for AI agents may face compliance failures or runaway systems that damage business operations. Those that learn how to manage artificial colleagues can leverage the human-like capabilities of agentic tools without the traditional constraints of human hiring, training, and retention.

The Scope Tension: Retrofit Versus Reengineer. “Agentic AI presents leaders with a critical resource-allocation choice, now complicated by the rapid pace of technological change. The decision is whether to retrofit AI into existing workflows for quick, incremental gains or to reengineer processes for more transformative but slower results.”

Retrofitting will likely deliver faster returns with current technology but may limit agentic AI to incremental improvements rather than transformative opportunities. Reengineering processes may create new competitive capabilities but requires a significant commitment of resources over a longer time horizon.

A Strategic Overhaul of Workflows, Governance, Roles, and Investment

“The tensions created by agentic AI’s dual nature demand a strategic response that goes beyond incremental adjustment. Unlike technologies that can be managed within traditional functional silos, agentic AI cuts across internal organizational boundaries. … None of the four fundamental tensions can be resolved by any single function acting alone; each requires new forms of executive collaboration that transcend the departmental boundaries that have defined organizational structures since the beginning of industrialization.”

Once executives have defined the value they want from agentic AI, they must address five interlocking implications to move from awareness to action:

1. Redesigning Work: Move Beyond Incrementalism. Agentic systems do not simply speed up existing steps; they invite leaders to rethink the design of entire workflows, blending human judgment and machine autonomy in ways that legacy processes were never designed to accommodate.

2. Governance and Decision Rights: Making Decisions and Setting Rules. Agentic AI creates a governance dilemma unlike that posed by previous technologies. Tools are owned and predictable, while people are autonomous and must be supervised. Agentic systems fall somewhere in between: they are owned like assets but act in ways that require oversight, similar to employees. The key question for managers becomes: How do we assign decision rights, accountability, and oversight to actors we own but do not fully control?

3. Organizational Structure and Strategic Workforce Planning: Redefining Roles, Not Just Skills. Traditional organizational design has been built around human workers — spans of control, management layers, and career paths all structured around human effort. Agentic systems challenge this logic. Executives must now consider not only what skills they need but also what organizational structures should look like when humans and AI agents work side by side.

4. Upskilling, Learning Loops, and Life-Cycle Management: Building Human and Agent Capacity. Agentic AI reshapes how organizations approach learning and development on two fronts. Employees must develop new skills in supervising, critiquing, and orchestrating AI systems. At the same time, AI agents require life-cycle management — onboarding, training, retraining, and eventual retirement — to remain reliable and effective. Leaders should continually ask: How are we ensuring that both our people and our agents continue to learn and improve?

5. Investment Strategy: Budgeting for Permanent Uncertainty. Traditional investment frameworks assume clear distinctions between capital and operational expenditures, short- and long-term returns, or centralized and distributed spending. Because agentic AI cuts across these categories, organizations must develop financial architectures capable of supporting multiple investment approaches simultaneously.

Conclusion

“With agentic AI, leaders are managing a new entity without historical precedent,” the authors conclude. “It is a tool that learns, a worker that is owned, and an investment that behaves like both tool and worker. Agentic AI does not fit neatly within the substitute-or-complement framework because it must be managed as both a worker and a tool at once.”

Agentic AI forces a tough, unsettling question: “How do we manage artificial colleagues that we own like equipment but must supervise like people, and that depreciate like machinery but learn like humans?”

“Ultimately, this challenge is deeply human and requires that businesses break down organizational silos. The era of managing technology solely within the IT department is over; governance is now a mandatory cross-functional effort where IT, HR, finance, and operations must collaborate on a unified framework. Agentic AI elevates human judgment rather than eliminating it. Strategic oversight, ethical governance, and the ability to orchestrate human-AI teams become the most critical human skills as AI agents handle tasks previously performed by people.”

Managing agentic AI will therefore require not just new technologies but new management principles. The organizations that thrive will be those that focus less on the technology itself and more on the human systems that surround it.
