A few weeks ago, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) released the 2026 AI Index Report, its ninth annual analysis of the impact, progress, and trends of AI. Led by an interdisciplinary group of experts from across academia and industry, the AI Index offers one of the most comprehensive, data-driven views of the continuing evolution of artificial intelligence.
“A year ago, this report documented AI’s arrival as a mainstream force,” wrote the report’s co-directors, Yolanda Gil and Raymond Perrault. “This year’s data shows what happens after arrival.”
“This is a technology that has reached mass adoption faster than the personal computer or the internet. Generative AI hit nearly 53% population-level adoption within three years. Leading AI companies are reaching meaningful revenue scale in a fraction of the time it took previous technology generations, and global corporate investment more than doubled in 2025. Organizational adoption rose to 88%, and early estimates suggest the consumer value of generative AI has grown substantially within a year.”
“The data does not point in a single direction,” they added. “It reveals a field that is scaling faster than the systems around it can adapt.”
The long, comprehensive 423-page report is organized into nine chapters: Research & Development, Technical Performance, Responsible AI, Economy, Science, Medicine, Education, Policy and Governance, and Public Opinion. Its key takeaways include:
- Capability is accelerating — not plateauing. AI performance continues to improve rapidly across a wide range of domains. Industry now produces over 90% of notable frontier models, many of which meet or exceed human baselines in advanced scientific reasoning, multimodal tasks, and competition mathematics. … At the same time, adoption is widespread: 88% of organizations and four in five university students now use generative AI.
- The global AI race is tightening. The performance gap between U.S. and Chinese models has effectively closed, with leadership changing hands multiple times since early 2025. While the U.S. still leads in top-tier models and investment, China dominates in publications, citations, and industrial deployment.
- Infrastructure strength — and fragility — are both increasing. The U.S. leads in AI data center capacity by a wide margin, but the global hardware supply chain remains highly concentrated. A single company, TSMC, produces most advanced AI chips, highlighting a critical dependency despite recent efforts to diversify production geographically.
- AI progress remains uneven — the “jagged frontier.” AI systems can achieve extraordinary results in some domains while failing at relatively simple tasks in others. This unevenness is evident in both capabilities (e.g., solving Olympiad-level math vs. reading clocks) and in the still-limited reliability of AI agents in real-world tasks.
- Governance and safety are falling behind. Responsible AI efforts are not keeping pace with technical advances. Reporting on safety benchmarks remains inconsistent, and documented AI incidents rose sharply in 2025. Progress in one dimension, such as safety, can sometimes come at the expense of another, such as accuracy.
- Investment remains strong, but talent flows are shifting. The U.S. continues to dominate private AI investment and startup formation. However, its ability to attract global AI talent is declining significantly, raising longer-term competitiveness concerns.
- Adoption is rapid — but uneven across countries. Generative AI adoption has reached 53% globally within three years, faster than previous major technologies. However, adoption varies widely, with some smaller economies leading and the U.S. ranking lower than expected. Consumers are already capturing substantial economic value from AI tools.
- Productivity gains are real — but unevenly distributed. AI is delivering measurable productivity improvements in areas like customer support and software development. However, gains are uneven, and some entry-level roles — particularly in software — are already showing signs of decline, even as demand for experienced workers increases.
- Education systems are struggling to keep up. AI use among students is widespread, but formal education systems lag in policies, curriculum, and teacher preparation. At the same time, AI skills are being acquired outside traditional institutions at an accelerating pace.
- Public and expert views remain sharply divided. There is a significant gap between expert optimism and public skepticism regarding AI’s impact. Trust in institutions to manage AI is also fragmented globally, complicating governance efforts.
In last year’s blog on the 2025 AI Index Report, I discussed the report’s chapter on AI-driven advances in Science and Medicine — areas in which I have a long-standing interest given my past involvement with applications of supercomputing in scientific research. This year’s report includes separate chapters on Science and Medicine, underscoring AI’s growing impact in both fields. While I continue to follow these developments closely, I want to focus here on AI’s impact on education — an area I’ve been paying increasing attention to over the past year.
“Demand for AI education is growing across every level, but the systems needed to deliver it are still catching up,” notes the report. “Computer science enrollment in post-secondary institutions is declining even as AI-related majors gain popularity. Students at both the university and K–12 levels are using AI tools in large numbers, yet access to AI-specific coursework and teacher training remains limited.”
“Four out of five U.S. high school and college students now use AI for schoolwork, but school policies have not kept pace. Only half of middle and high schools have AI policies, and just 6% of teachers say those policies are clear. Students most commonly use generative AI for research, essay editing, and brainstorming.”
At the same time, some researchers have raised concerns about students' increasing reliance on AI tools. In a 2025 Substack essay, “Will AI create a generation of non-thinkers?,” Bharat Chandar, a postdoctoral researcher at Stanford’s Digital Economy Lab, questioned whether students may fail to develop critical thinking skills if they rely too heavily on AI.
“Recall staring blankly at a page, struggling to come up with an answer to an essay prompt,” he wrote. “Formulating and articulating a thought might have taken hours… Working through writer’s block to craft a compelling argument was a painstaking rite of passage.”
“Do students today have this experience?” Chandar asks. “If AI can write our essays, what happens to human thought?”
His concerns echo a broader debate highlighted in a recent article in The Economist, “How AI will divide the best from the rest,” which asked whether AI could widen social divides. Early optimism suggested that AI might level the playing field by extending expert capabilities to less-skilled workers. For example, in a 2024 article, MIT economist David Autor argued that AI could help rebuild the middle class by extending the value of human expertise to a larger set of workers.
More recent evidence, however, suggests a more complex reality. In cognitively demanding tasks such as research and management, high performers appear better positioned to benefit from AI. Effectively using AI requires judgment and expertise — meaning that, rather than narrowing inequalities, AI may widen them, as has often been the case with past technological revolutions.
Meanwhile, higher education is actively debating how best to incorporate AI into teaching. As noted in a recent New York Times article, “AI Is Coming to Class,” some instructors remain strongly opposed to AI tools, while others are experimenting with ways to integrate them into writing and learning.
Conclusion
Taken together, the findings of the 2026 AI Index Report point to a common theme: AI is advancing not just as a technology, but as a systemic force. Its capabilities, adoption, and economic impact are scaling rapidly, while the institutions that shape its use — education, governance, labor markets — are struggling to keep pace.
This tension is especially visible in education. Students are already deeply engaged with AI, often in ways that outstrip the ability of schools and universities to guide its use. The question is no longer whether AI will be part of learning, but how learning itself must evolve in response.
More broadly, the report suggests that the next phase of the AI era will be defined less by breakthroughs in models and more by how effectively societies adapt to them. The challenge is not only to build more powerful systems, but to ensure that the human systems around them evolve just as quickly — and just as thoughtfully.
