Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

“Artificial intelligence is reshaping economic systems at a pace we have rarely seen in modern technological history,” wrote Frank Nagle in “Revealing the Hidden Economics of Open Models in the AI Era,” a blog post published in November 2025. Nagle is Advising Chief Economist at the Linux Foundation and a Research Scientist at MIT’s Initiative on the Digital Economy. “Every sector — from finance to healthcare to manufacturing — is scrambling to understand how to harness AI safely, efficiently, and competitively,” he added. “Yet amid the excitement, a crucial part of the story has been missing. Specifically, understanding the role that open models play in the AI economy, and how much value is being left on the table when organizations overlook open alternatives, are two topics requiring a closer look.”

In a new working paper, “The Latent Role of Open Models in the AI Economy,” Nagle and Georgia Tech professor Daniel Yue probe these questions by analyzing a comprehensive data set of AI model usage, prices, and performance. “The findings surprised even us — and they carry major implications for the Linux Foundation community and the global open source ecosystem to the tune of billions of dollars of possible savings on AI expenditure.”

Their research finds that closed models dominate the AI economy despite their higher prices and only modest performance advantages. In particular:

  • Closed models dominate the LLM inference economy. Closed models account for 80% of AI token usage and more than 95% of revenue, despite prices that are, on average, six times higher than those of open models, and only modest performance advantages.
  • Open models are rapidly matching the performance of closed models. While closed models retain a small (roughly 10%) performance edge on key benchmarks, open models consistently reach parity with frontier closed models within three to six months.
  • Open models benefit from significantly lower prices. Open model prices are approximately 84% lower than those of closed models, driven by competition among the many third-party providers able to offer them (see the back-of-the-envelope sketch after this list).
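The pricing arithmetic in these bullets is easy to sanity-check. The short Python sketch below is a back-of-the-envelope illustration only: the absolute per-token prices and the workload size are hypothetical placeholders, and only the roughly six-times price ratio and the approximately 84% discount come from the findings summarized above.

```python
# Back-of-the-envelope check of the pricing bullets above.
# The absolute prices and the workload size are hypothetical placeholders;
# only the ~6x price ratio comes from the findings summarized in this post.

open_price = 1.00                # assumed: $1 per million tokens (illustrative)
closed_price = 6.0 * open_price  # "six times higher," on average

# A 6x price ratio implies open models are about 1 - 1/6 (roughly 83%) cheaper,
# in line with the approximately 84% figure reported above.
discount = 1 - open_price / closed_price
print(f"Open models are about {discount:.0%} cheaper per token")

# Illustrative bill for a hypothetical 500-million-token workload.
tokens_millions = 500
closed_bill = tokens_millions * closed_price
open_bill = tokens_millions * open_price
print(f"Closed: ${closed_bill:,.0f}   Open: ${open_bill:,.0f}   "
      f"Savings: ${closed_bill - open_bill:,.0f}")
```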

These findings raise a striking economic puzzle: “if open models offer comparable performance at substantially lower prices, why do closed models continue to dominate?”

“This systematic underutilization is economically significant: reallocating demand from observably dominated closed models to superior open models would reduce average prices by over 70% and, when extrapolated to the total market, generate an estimated $24.8 billion in additional consumer savings across 2025.”
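To make this reallocation claim concrete, here is a toy calculation rather than the paper’s methodology. It uses only the 80% closed-model share of token usage and the roughly six-times average price ratio cited above, and it simplifies by treating closed-model token volume as a single block that can be shifted to open models, rather than identifying the specific “observably dominated” models as Nagle and Yue do.

```python
# Toy illustration of the reallocation claim quoted above. The 80% usage share
# and ~6x price ratio come from this post; treating closed-model volume as a
# single reallocatable block is a simplification, not the paper's method of
# identifying "observably dominated" models.

open_price = 1.0                 # normalized open-model price per token
closed_price = 6.0 * open_price  # "six times higher," on average
closed_share = 0.80              # closed models' share of token usage

def avg_price(reallocated_fraction: float) -> float:
    """Usage-weighted average price after shifting a fraction of closed-model
    token volume onto open models."""
    closed = closed_share * (1 - reallocated_fraction)
    return closed * closed_price + (1 - closed) * open_price

before = avg_price(0.0)   # 0.8 * 6 + 0.2 * 1 = 5.0
after = avg_price(1.0)    # all volume on open models = 1.0
print(f"Shifting all closed volume cuts the average price by {1 - after / before:.0%}")  # 80%

# Even a partial shift clears the "over 70%" bar cited above.
partial = avg_price(0.90)
print(f"Shifting 90% of closed volume: {1 - partial / before:.0%} lower")  # 72%
```

Extrapolating per-token savings of this kind to total 2025 market spending is how an aggregate figure like the quoted $24.8 billion arises, though the paper’s estimate rests on model-level comparisons rather than a single aggregate ratio.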

“In aggregate, perhaps the most striking finding from our analysis is not the value that open models currently create, but rather the far larger value they could create if users made (potentially) more efficient choices.”

These results suggest that the dominance of proprietary models may be driven by forces beyond model capabilities and price. Two potential explanations merit consideration.

First, consumer preferences that are unobserved by the researcher but known to the consumer:

  • Standardized benchmarks may not capture quality dimensions that matter in production, such as safety, alignment, style, and hallucinations, where closed models may still have an advantage.
  • Migration costs from workflows adapted to specific model behaviors, prompt optimization, or proprietary features.
  • Established providers offer legal recourse and accountability that may be absent with smaller, open inference providers.
  • Security concerns, including data exfiltration risks from foreign-trained models or untrusted inference providers.

Second, information frictions that could potentially be mitigated through policy or education:

  • Lack of awareness of cheaper, higher-performance alternatives leads to the use of outdated models, possibly due to the rapid pace of change.
  • Institutional conservatism that favors established vendors. In the 1990s, the refrain was “no one ever got fired for buying IBM or Microsoft.” In 2025, it may well be “no one ever got fired for buying OpenAI or Anthropic.” 
  • Beliefs that using open models exposes proprietary data to competitors or the public.
  • Data privacy and other security concerns related to the use of foreign models. 

In conclusion, will open source turn out to be as important in the emerging AI era as it has been over the past few decades?

In the foreword to “The State of Commercial Open Source in 2025,” an August 2025 report by Linux Foundation Research, Nagle wrote: “For years, questions have lingered over whether open source companies could deliver the kind of growth, defensibility, and returns that venture capital demands. Though research (including my own) has shown the importance of open source to startups, these questions about companies that have open source at the core of their business model remain.”

“Drawing on a quarter century of venture data, the picture that emerges is striking,” he added. “Companies built on open source foundations routinely achieve higher valuations at exit — in some cases, several multiples greater than those of closed-source peers. … The future of software will be shaped in the open. By aligning the incentives of capital and community, we have the opportunity to create technologies that are not only commercially successful, but also resilient, widely adopted, and deeply collaborative.”

Similarly, Nagle concludes his aforementioned blog post by reminding us of the value of open models in the AI era:

Open source continues to create massive, underrecognized value. “In addition to our main results, we also find that if open models disappeared, consumers would need to spend between $350 million to $1.23 billion more than they currently are on LLM inference. Although this is an order of magnitude less than the potential unrealized value, it echoes decades of research showing how open source software quietly creates trillions in economic value through cost reductions, complementarities, and innovation spillovers. Open model AI appears to follow the same pattern — but at a much faster cadence.”

Today’s underutilization mirrors early open source adoption patterns. “Just as enterprises once hesitated to adopt Linux or Apache due to uncertainty or risk aversion, organizations today hesitate to adopt open AI models, even when doing so is rational from a cost–performance standpoint. This reinforces the need for the Linux Foundation’s convening power to educate the market, provide governance assurance, support neutral benchmarking, and build trusted, community-driven infrastructure around open models.”

The future of AI will be hybrid — and openness will be essential. “Closed models will continue to play a crucial role. But open models are emerging as the competitive floor that disciplines pricing, accelerates innovation, and democratizes access. That competitive tension is healthy for the entire ecosystem.”
