The 2025 MIT Sloan CIO Symposium took place on Tuesday, May 20 at the Royal Sonesta hotel overlooking the Charles River, a short walk from the MIT campus. Not surprisingly, AI was the dominant theme in this year’s Symposium, with a number of keynotes and panels on the topic.
Over the past few years, the Symposium’s closing keynote has been given by a prominent MIT faculty member. In 2023, economics professor David Autor talked about “The Impact of AI on Jobs and the Economy,” where he discussed the potential of AI to reshape the nature of human expertise. In 2024, MIT professor emeritus of robotics Rodney Brooks talked about “What Works and Doesn’t Work with AI,” where he explained what he called the “Seven Deadly Sins of Predicting the Future of AI.”
This year, Daron Acemoglu, Institute professor of economics and 2024 Nobel laureate, talked about “The Long Term Evolution of AI Economics.”
Automation vs. Human Complementarity
Professor Acemoglu opened his keynote by telling us that we face two very important choices when considering the development and deployment of AI, choices that will have major economic and possibly cultural implications for the future.
Automation technologies. One choice is to develop AI tools aimed at automating and replacing human labor. This could reduce the price of many products and services, and might possibly increase their quality if the tools work particularly well. It would result in lower costs for companies, but “the bottom line for workers would not be that great.”
Human-complementary technologies. Now imagine other AI tools designed to enhance workers’ capabilities and productivity. “Suppose that you give a mobile AI-powered device that can take pictures and instantly link them to previous use cases when it comes to complex new electrical equipment. And this AI tool is given to electricians that, as a result of having access to this tool, can engage in much more sophisticated tasks, perhaps some new tasks related to the electricity grid or other activities that they themselves, given their level of expertise and experience, would not have been able to do otherwise.”
“The future will have both sorts of technologies,” Acemoglu added. The balance between the two will create different types of winners and losers and “is going to determine the broader impacts of AI.”
Economic Bias
From a purely economic point of view, the bias toward automation is understandable. For managers under pressure to reduce costs in a highly competitive environment, automation offers a quick and predictable return. “There’s nothing wrong with automation,” said Acemoglu. “If we had not automated some of the manual tasks that people used to perform in the 18th century, we would live in a very different world.” Throughout history, many companies have gone down this path. But you have likely never heard of them, “because no company goes into the history books because they have cut costs by 10% or 20%.”
Companies that appear in history books are those that have developed major human-complementary technologies. For example, the Ford Motor Company revolutionized the entire economy by leveraging the growing deployment of electricity in the early 20th century to transform manufacturing with the development of the assembly line. Nobody had previously thought that you could combine a series of electric machines, each with its own small electric motor, to come up with a completely decentralized factory design, which, together with much better trained workers, could be used to build different kinds of cars at a fraction of the previous cost, “and create a new market that nobody dreamed of. That was the combination of technology and human resources in a way that was unparalleled for its time.” As a result, the Ford assembly line has long been in the history books.
Like electricity, AI is a general-purpose technology (GPT). GPTs are the defining technologies of their times and can radically change the economic environment. They have great potential, but realizing that potential requires large complementary investments, including business process redesign, innovative new products and business models, the re-skilling of the workforce, and a fundamental rethinking of the organization of production, as was the case with the assembly line. As a result, there’s generally been a significant time lag between the development of a major transformative technology, like electricity and now AI, and its widespread deployment across companies and industries, which then leads to long-term economic and productivity growth.
“But the question is whether there are now enough incentives in the tech sector to actually develop the human-complementary machines,” said Acemoglu, because they may be less profitable and less attractive in the short term than the simpler automation technologies. “I think many managers will be very willing to find ways of using technology that are going to make their most valuable human resource, skilled employees, more productive. Even some of them may be imaginative enough to find ways of making their less skilled workers more productive.”
But in the end, “the economics creates a small or moderate bias towards too much automation and not enough of these human complementary technologies that can create these new services, new tasks and new products.”
Adding to the bias toward automation are the “fever dreams” of developing artificial general intelligence (AGI), a term whose definition is somewhat unclear. “Ask three experts in Silicon Valley, they will give you four different definitions, but one thing is clear. I think artificial general intelligence is really about replacing humans,” because if AI becomes as capable as human experts across many different occupations, it’s going to automate many of their tasks and replace them.
The Historical Roots of AI
The key question we’ve long been asking is “When are we going to get to AGI?” said Acemoglu. “How far until we get there? And if we don’t get there, will we still try to automate a lot of things with the prospect that perhaps someday we may get to AGI?”
Even more powerful than the economic bias towards too much automation is the kind of motivation that AI researchers have long had towards artificial general intelligence. To better understand that vision, “we need to go back to the origin story of computer science and AI.”
In his seminal 1950 paper, “Computing Machinery and Intelligence,” Alan Turing proposed a concept that’s become known as the Turing test, which he originally called the imitation game, to test a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. A machine is said to have passed the Turing test if a human evaluator, sitting at a remote keyboard, couldn’t tell whether they were having a natural language conversation with a human or a machine.
“If the human brain, even if it works on different substrates, is just a Turing machine itself, just a computer that engages in computation, then we’ll have to be an inferior version of the universal Turing machine,” noted Acemoglu. “So if our advances can take us towards the universal Turing machine, well, we’re on our way to artificial general intelligence, perhaps even artificial superintelligence.”
He also mentioned another major milestone in the AI origin story: the 1956 Dartmouth Conference in which the field was christened. According to the AI Myths website, the first use of the term artificial intelligence appeared in a proposal for a workshop at Dartmouth College in the summer of 1956 that was attended by the pioneers of the emerging AI academic discipline. As the proposal noted: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans, and improve themselves.” Needless to say, the wildly ambitious summer study was unable to accomplish its aim, nor has that aim been achieved over the past seven decades.
This AGI-centric view of AI has been reinforced in the wider society by the many science fiction books and movies about super-intelligent machines. After all, said Acemoglu, it’s more exciting to make movies about AI taking over civilization than movies about AI augmenting the expertise of electricians.
Acemoglu concluded his keynote by discussing an alternative ecosystem of leading-edge ideas that may not be as well known today, when we’re all talking about AGI, but that has long played a major role in the evolution of computers. This tradition is centered on Man-Computer Symbiosis, that is, on the development of human-complementary technologies that make computers significantly more usable by focusing on users and their needs and requirements. Acemoglu briefly mentioned three of the key pioneers of this technical tradition:
- Norbert Wiener, who in the 1940s originated the field of cybernetics, the science of communication between humans and machines, which has been quite influential in computer science and other engineering disciplines;
- Douglas Engelbart, who’s best known for his pioneering work on human-computer interaction and the development of technologies like the computer mouse, graphical user interfaces, and hypertext that have made computers much more usable and have helped humans become more productive; and
- J. C. R. Licklider, whom Acemoglu called “the grandfather of the Internet” for his leading role in helping to launch ARPANET, the direct predecessor of the Internet, and his vision of the role of information technologies in expanding the human mind, enabling us to perform tasks that we couldn't even dream of.
A Strategy for a Better AI Future
In the age of AI, will we build tools that displace people or ones that expand human potential? Will we chase narrow efficiency or pursue shared prosperity? According to Acemoglu, the answer to these important questions will depend on three key factors:
- Company owners and managers must recognize the long-term value of using AI to empower workers and foster innovation — not just reduce labor costs;
- Investors and entrepreneurs need to pursue products and platforms that enhance human capability, even if the path toward monetization is more complex or less immediate; and,
- The broader public must embrace the idea that a path that makes workers more productive and innovative is better for society than the alternative.
“I remain convinced … that what people like Wiener, Engelbart, and Licklider were trying to do and sometimes achieved remains even more feasible today. But it's not going to happen automatically. And I think one of the very important things about the future of AI is to learn from the past of digital technologies, but also dream big about how companies can revolutionize themselves in terms of offering new services, new products, new tasks, and going into history books.”