In his legendary 1965 paper, “Cramming More Components onto Integrated Circuits,” Intel co-founder Gordon Moore first made the empirical observation that the number of components in integrated circuits had been doubling every year since their invention in 1958. Moore predicted that the trend would continue for at least ten years, a forecast he revised in 1975 to a doubling every two years. The semi-log graphs associated with Moore’s Law have since become a visual metaphor for the revolution unleashed by the exponential improvement of just about all digital components, from processing speeds and storage capacity to networking bandwidth and pixels.
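As a back-of-the-envelope illustration of the compounding behind Moore’s Law, consider a sketch in Python. The starting figure (the 1971 Intel 4004’s roughly 2,300 transistors) is used here only as a familiar reference point, not a number from this article:

```python
def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a component count assuming one doubling per period (Moore's Law)."""
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# 25 doublings between 1971 and 2021 at two-year intervals:
# 2,300 * 2**25 is roughly 77 billion, in the neighborhood of
# the largest chips actually shipping by the early 2020s.
print(projected_transistors(2_300, 1971, 2021))
```

The point of the sketch is simply that a fixed doubling period produces growth that is linear on a semi-log plot, which is why Moore’s Law charts are drawn that way.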
Moore’s Law has had quite a run, but like all things based on exponential improvements, it eventually slowed down and flattened out in the 2010s. What will now drive computing in the post-Moore’s-Law era?
Software has been a key driver of computing since the 1980s. In his 2011 essay, “Why Software Is Eating the World,” technologist and investor Marc Andreessen wrote that “we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy. More and more major businesses and industries are being run on software and delivered as online services — from movies to agriculture to national defense.” Entrepreneurial companies all over the world have been disrupting established industries with innovative software-based solutions.
In 2017, Nvidia CEO and co-founder Jensen Huang boldly updated Andreessen’s essay by predicting that “Software Is Eating the World but AI is Going to Eat Software.” In 2024, he further predicted that “the future of coding as a career might already be dead in the water with the imminent prevalence of AI.”
“But even as software eats the world and AI gobbles up software, what disrupter appears ready to make a meal of AI?,” asked MIT researchers Michael Schrage and David Kiron in “Philosophy Eats AI,” a recent article in MIT Sloan Management Review. “The answer is hiding in plain sight. It challenges business and technology leaders alike to rethink their investment in and relationship with artificial intelligence. There is no escaping this disrupter; it infiltrates the training sets and neural nets of every large language model (LLM) worldwide.”
“Philosophy is eating AI: As a discipline, data set, and sensibility, philosophy increasingly determines how digital technologies reason, predict, create, generate, and innovate,” the authors add. “Generating sustainable business value with AI demands critical thinking about the disparate philosophies determining AI development, training, deployment, and use.” Hardware and software have long been technical disciplines, taught in electrical engineering and computer science departments. But AI, with its emphasis on human qualities like intelligence, knowledge, language, and reasoning, feels like a very different discipline.
“The critical enterprise challenge is whether leaders will possess the self-awareness and rigor to use philosophy as a resource for creating value with AI or default to tacit, unarticulated philosophical principles for their AI deployments. Either way — for better and worse — philosophy eats AI.”
“For strategy-conscious executives, that metaphor needs to be top of mind,” note Schrage and Kiron. Their article brings to mind a blog post I wrote in 2010, “Business Management and Holistic, Critical Thinking,” based on a NY Times article about the efforts of Roger Martin, who at the time was dean of the Rotman School of Management at the University of Toronto, to transform business education.
The NYT article noted that about a decade earlier, Professor Martin began advocating “what was then a radical idea in business education: that students needed to learn how to think critically and creatively every bit as much as they needed to learn finance or accounting. More specifically, they needed to learn how to approach problems from many perspectives and to combine various approaches to find innovative solutions.”
“Learning how to think critically, how to imaginatively frame questions and consider multiple perspectives, has historically been associated with a liberal arts education, not a business school curriculum, so this change represents something of a tectonic shift for business school leaders. Mr. Martin even describes his goal as a kind of liberal arts M.B.A. ‘The liberal arts desire,’ he says, is to produce ‘holistic thinkers who think broadly and make these important moral decisions. I have the same goal.’”
Martin’s ideas may have been radical in the emerging Internet world of the 1990s and 2000s. But even before the 2008 global financial upheaval, “business executives operating in a fast-changing, global market were beginning to realize the value of managers who could think more nimbly across multiple frameworks, cultures and disciplines. The financial crisis underscored those concerns — at business schools and in the business world itself.”
As a result, business schools began moving into territory “more traditionally associated with the liberal arts: multidisciplinary approaches, an understanding of global and historical context and perspectives, a greater focus on leadership and social responsibility and, yes, learning how to think critically.”
Wikipedia defines philosophy as “a systematic study of general and fundamental questions concerning topics like existence, reason, knowledge, value, mind, and language. … Major branches of philosophy are epistemology, ethics, logic, and metaphysics. Epistemology studies what knowledge is and how to acquire it. Ethics investigates moral principles and what constitutes right conduct. Logic is the study of correct reasoning and explores how good arguments can be distinguished from bad ones. Metaphysics examines the most general features of reality, existence, objects, and properties.”
“Philosophical perspectives on what AI models should achieve (teleology), what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation,” wrote Schrage and Kiron. “Without thoughtful and rigorous cultivation of philosophical insight, organizations will fail to reap superior returns and competitive advantage from their generative and predictive AI investments.”
“Explicitly drawing on philosophical perspectives is hardly new or novel for AI,” they added. “Breakthroughs in computer science and AI have consistently emerged from deep philosophical thinking about the nature of computation, intelligence, language, and mind. Computer scientist Alan Turing’s fundamental insight about computers, for example, came from philosophical questions about computability and intelligence — the Turing test itself is a philosophical thought experiment.”
In a related Q&A, “Is Philosophy the Next LLM Training Frontier?,” the authors answered questions about their article. Let me discuss a few of these questions.
Why do you see philosophy as the ultimate aim for AI success? “Generative AI’s rise — and the power and potential of LLMs — means philosophy simultaneously becomes a capability, a sensibility, a dataset and an enabler for training and for gaining greater value from AI investments. Philosophy today is a mission-critical, strategic differentiator for organizations that want to maximize their return on AI.”
Give us a simple and accessible framework for thinking about this. “Critical thinking and philosophical rigor will get you better outcomes from both generative and predictive AI models. When designers prompt the model to think better, its responses prompt humans to think better. That’s a virtuous cycle that decision makers need to embrace.”
The majority of AI developers — as well as business leaders — don’t think about philosophy on a daily basis. How will philosophy infiltrate AI software design going forward? “Remember, we’re far beyond ‘just’ coding and development — we’re training models to learn and learn how to learn. What learning principles matter most? What do we want our models to ‘understand’ about customer or employee loyalty? What kinds of collaborators and partners do we want them to become for us and with us? Barely five years ago, these questions were hypothetical and rhetorical. Today, they define research agendas by organizations that really want to get the best impact from their AI investments.”
What is the role of humans in this brave new world? “Philosophy’s ultimate AI impact might not be in making these intelligences more ethical or better aligned with current human values, but in transcending our current perceived limitations and inspiring new frontiers of understanding and capability.”
“We argue that AI systems rise or fall to the level of their philosophical training, not their technical capabilities,” wrote Schrage and Kiron in conclusion. “When organizations embed sophisticated philosophical frameworks into AI training, they restructure and realign computational architectures into systems that:
- Generate strategic insights rather than tactical responses;
- Engage meaningfully with decision makers instead of simply answering queries; and
- Create measurable value by understanding and pursuing organizational purpose.
“These should rightly be seen as strategic imperatives, not academic exercises or thought experiments. Those who ignore this philosophical verity will create powerful but ultimately limited tools; those embracing it will cultivate AI partners capable of advancing their strategic mission. Ignoring philosophy or treating it as an afterthought risks creating misaligned systems — pattern matchers without purpose, computers that generate the wrong answers faster.”