On January 1, MIT professor emeritus Rodney Brooks published his 2025 Predictions Scorecard. A member of MIT’s faculty since 1984, Brooks was director of the MIT AI Lab from 1997 to 2003, and the founding director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) from 2003 until 2007. He’s also been a robotics entrepreneur, having started a number of companies, including iRobot, Rethink Robotics, and Robust.AI.
Brooks has posted a Predictions Scorecard every year since 2018, in which he makes predictions about future milestones in three technology areas that he closely follows: robotics, AI, and machine learning; self-driving cars; and human space travel. He also reviews the actual progress in each of these areas to see how his past predictions have held up, and promises “to review them at the start of the year every year until 2050 (right after my 95th birthday), thirty two years in total” in order to hold himself accountable for those predictions. “How right or wrong was I?”
In his 2023 Predictions Scorecard, Brooks explained that he makes his predictions because he’s seen “an immense amount of hype about these three topics, and the general press and public drawing conclusions about all sorts of things they feared (e.g., truck driving jobs about to disappear, all manual labor of humans about to disappear) or desired (e.g., safe roads about to come into existence, a safe haven for humans on Mars about to start developing) being imminent. My predictions, with dates attached to them, were meant to slow down those expectations, and inject some reality into what I saw as irrational exuberance.”
Let me summarize what he wrote in his 2025 Predictions Scorecard in the area I most closely follow:
What happened in Robotics, AI, and Machine Learning this past year?
“The level of hype about AI, Machine Learning and Robotics completely distorts people’s understanding of reality.” There’s definitely been significant progress in AI over the last decade. “There are new tools and they are being applied widely in science and technology, and are changing the way we think about ourselves, and how to make further progress. That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs. Their tasks may shift in both styles of jobs, but the jobs are not going away.”
“We are not on the verge of a revolution in medicine and the role of human doctors. We are not on the verge of the elimination of coding as a job. We are not on the verge of replacing humans with humanoid robots to do jobs that involve physical interactions in the world. We are not on the verge of replacing human automobile and truck drivers world wide. We are not on the verge of replacing scientists with AI programs.”
Breathless predictions about AI are not new. They’ve been happening ever since AI became an academic discipline in the 1950s. “The only difference this time is that these expectations have leaked out into the world at large.”
Why have so many AI predictions turned out so wrong?
The answer is what Brooks called the Seven Deadly Sins of Predicting the Future of AI in a 2017 essay. Four of those seven sins are particularly relevant to today’s hyped-up AI atmosphere.
- Performance vs Competence. “We use cues from how a person performs any particular task to estimate how well they might perform some different task. We are able to generalize from observing performance at one task to guess at competence over a much bigger set of tasks. These estimators that we have all inherited or learned do not generalize well to other creatures or machines. We are not good at guessing which smart things other species might be able to do, and we are not good at guessing what an AI system can do when we have seen it do a few tasks in a limited domain. We get it wrong all the time.”
- Indistinguishable from Magic. Science fiction writer Arthur C. Clarke famously said: “Any sufficiently advanced technology is indistinguishable from magic.” If a new technology is significantly more advanced than the technologies we have and understand today, then we don’t know its limitations. In principle anything becomes possible, so it may as well be magic.
- Exponentialism. For the past sixty years the technology world has enjoyed the most phenomenal growth in the history of mankind: Moore’s Law. The semi-log graphs associated with Moore’s Law became a visual metaphor for the technology revolution unleashed by the exponential improvement of digital components, from processing speeds to storage capacities. Moore’s Law had quite a run, but like all processes built on exponential improvement, it eventually slowed down and flattened out. “The sin of exponentialism is to argue that some other process is going to follow a Moore’s-like law when it is unwarranted to so argue.”
- Speed of Deployment. “A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products. Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.”
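The exponentialism sin can be made concrete with a small numeric sketch (illustrative numbers only, not real transistor counts): a process that doubles on schedule early on looks identical to one that is quietly saturating toward a physical ceiling, until extrapolation suddenly goes wildly wrong.

```python
import math

def exponential(t, doubling_period=2.0):
    """Naive Moore's-Law-style extrapolation: doubles every `doubling_period` years."""
    return 2 ** (t / doubling_period)

def logistic(t, doubling_period=2.0, ceiling=1e6):
    """Same early doubling behavior, but saturating at a physical ceiling."""
    k = math.log(2) / doubling_period
    return ceiling / (1 + (ceiling - 1) * math.exp(-k * t))

# Early on the two curves agree; decades out they diverge by orders of magnitude.
for t in (0, 10, 20, 40, 60):
    print(f"year {t:>2}: extrapolated {exponential(t):.3g}, saturating {logistic(t):.3g}")
```

At year 60 the naive extrapolation predicts roughly a thousand times more growth than the saturating process actually delivers, which is the shape of the error Brooks is describing.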
These four so-called sins are responsible for the incredible hype surrounding both Large Language Models (LLMs) and robots that are capable of learning how to do things in the physical world.
There’s a lot of magical thinking about LLMs’ amazing facility with language.
“Miraculously LLMs seem to be able to infer a representation of some sort,” wrote Brooks. “So they are able to translate between human languages, and when you ask them just about anything they produce text in the language that you asked in, and that text often seems entirely reasonable and informative.
“I used the word ‘miraculously’ as we do not really understand why they are able to do what they do,” he added. “We, of course, know that the architecture for them is built around noticing correlations in vast amounts of text … It is a surprise that they work as well as they do, and produce coherent sounding language on just about any topic.”
So now, our human nature leads us to commit the first two sins mentioned above. First, if a human were giving such impressive responses to our questions, we would assume that the person is really smart. And, since we don’t really understand how LLMs work, “we start thinking it is magic, and that there is no real limit to what it is extracting from all that data.”
Some researchers are trying to show that LLMs have actually achieved human-like reasoning abilities, rather than being essentially stochastic parrots that rely on statistical correlations to predict the likely next words. A few predict that it won’t be long before LLMs match and surpass human cognitive capabilities (i.e., reach AGI and ASI) and will thus be able to diagnose diseases like a doctor, teach students like a human teacher, and program as well as a human programmer. “It is magic after all.”
But, while many of the LLMs’ outputs are truly impressive, they also cannot be trusted, due to so-called hallucinations (erroneous, fabricated information generated by the LLM) and confabulations (plausible but false narratives based on the LLM’s misunderstanding of the information used in its training).
There’s little question that AI systems will keep improving over time; the problem is that it’s not clear what is meant by over time: months, years, or decades. Time and again researchers have underestimated the difficulty of successfully deploying AI innovations in the marketplace, and this time is no different.
“Artificial Intelligence has the distinction of having been the shiny new thing and being overestimated again and again, in the 1960’s, in the 1980’s, and I believe again now,” wrote Brooks in his 2017 essay. “Not all technologies get underestimated in the long term, but that is most likely the case for AI. The question is how long is the long term.”
The hype around humanoid robots is pretty dumb
“The other thing that has gotten overhyped in 2024 is humanoid robots,” Brooks wrote in his 2025 Predictions Scorecard. “The rationale for humanoid robots being a thing is a product of the four sins above and I think way less rooted in reality than the hype about LLMs.”
He explained why overhyping humanoid robots is pretty dumb by referencing an essay he published in July 2024, “Rodney Brooks’ Three Laws of Robotics”:
- The visual appearance of a robot makes a promise about what it can do and how smart it is. It needs to deliver or slightly over deliver on that promise or it will not be accepted.
- When robots and people coexist in the same spaces, the robots must not take away from people’s agency, particularly when the robots are failing, as inevitably they will at times.
- Technologies for robots need 10+ years of steady improvement beyond lab demos of the target tasks to mature to low cost and to have their limitations characterized well enough that they can deliver 99.9% of the time. Every 10 more years gets another 9 in reliability.
The first law of robotics is “what’s sucking people into believing that humanoid robots have a big future. It looks like a human, so its performance will be like a human, so it will be competent like a human. It’s the performance/competence sin without even waiting for the performance part!”
The second law explains why humanoid robots will eventually fail and billions of dollars of investment will disappear. The robots won’t be able to do all the things investors and CEOs promise at acceptable levels. “They have hardly even got to the lab demonstration phase.”
And finally, the third law explains that “For real work, robots need to operate with four, five, or six nines. We are a long way from that. The zeitgeist is that we will simply teach the robots to do stuff and then they will be able to do it.” But we don’t know if or when this is going to work. “In order for it to work you have to both collect the right sort of data and then learn the right things from that data. It is not at all clear to me that we know the answers to make either of those things true. I think it will be an active place for lots of good research for many years to come.”
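The arithmetic behind “nines” makes the gap vivid. A sketch, using a hypothetical workload of 1,000 task attempts per day for a single deployed robot (the task count is an assumption for illustration, not a figure from Brooks):

```python
def failure_rate(nines: int) -> float:
    """Probability that a single task attempt fails, given n nines of success.
    E.g., 3 nines = 99.9% success = 0.001 failure probability."""
    return 10 ** -nines

tasks_per_day = 1000  # hypothetical workload for one deployed robot

for nines in (3, 4, 5, 6):
    p_fail = failure_rate(nines)
    failures_per_day = tasks_per_day * p_fail
    print(f"{nines} nines: success rate {1 - p_fail:.6f}, "
          f"~{failures_per_day:g} failures/day at {tasks_per_day} tasks/day")
```

At three nines (the 99.9% in Brooks’ third law) such a robot still fails about once a day; under his “every 10 more years gets another 9” rule, getting that down to one failure every thousand days would take roughly three more decades of steady improvement.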