Earlier this year, MIT professor emeritus Rodney Brooks gave the closing keynote at the 2024 MIT Sloan CIO Symposium. Professor Brooks was director of the MIT AI Lab from 1997 to 2003, and founding director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) from 2003 until 2007. A robotics entrepreneur, he has founded a number of companies, including iRobot, Rethink Robotics, and Robust.AI.
Given that AI was the overall theme of this year’s MIT Symposium, Brooks focused his keynote on “What Works and Doesn’t Work with AI.” He reminded the audience that “AI has been on the verge of changing everything” ever since the field was founded in the 1950s, and explained why so many AI predictions have turned out so wrong.
I wrote about his excellent presentation in “Artificial Intelligence: Realistic Expectations vs. Irrational Exuberance,” and concluded the blog with what Brooks called “My Three Laws of Artificial Intelligence”:
- When an AI system performs a task, human observers immediately estimate its general competence in areas that seem related. Usually that estimate is wildly overinflated.
- Most successful AI deployments have a human somewhere in the loop (perhaps the person they are helping) and their intelligence smooths the edges.
- Without carefully boxing in how an AI system is deployed there is always a long tail of special cases that take decades to discover and fix. Paradoxically all those fixes are AI-complete themselves.
At the end of July, Brooks posted two new essays on his website. The first, “Rodney Brooks’ Three Laws of Artificial Intelligence,” elaborated on his Three Laws of AI. I’d now like to discuss the second, “Rodney Brooks’ Three Laws of Robotics.”
“Here are some of the things I’ve learned about robotics after working in the field for almost five decades,” he wrote. “In honor of Isaac Asimov and Arthur C. Clarke, my two boyhood go-to science fiction writers, I’m calling them my three laws of robotics”:
- The visual appearance of a robot makes a promise about what it can do and how smart it is. It needs to deliver or slightly over deliver on that promise or it will not be accepted.
- When robots and people coexist in the same spaces, the robots must not take away from people’s agency, particularly when the robots are failing, as inevitably they will at times.
- Technologies for robots need 10+ years of steady improvement beyond lab demos of the target tasks to mature to low cost and to have their limitations characterized well enough that they can deliver 99.9% of the time. Every 10 more years gets another 9 in reliability.
He started the essay by noting that his three laws of robotics “are about real robots deployed in the real world. The laws are not about research demonstrations. They are about robots in everyday life,” and then proceeded to explain each of these laws in more detail.
The Promise Given by Appearance
The visual appearance of a robot is very important. It tells the buyer or user what to expect. “The point of this first law of robotics is to warn against making a robot appear more than it actually is.” When the robot cannot do all the things its physical appearance suggests, customers will be disappointed: “And disappointed customers are not going to be an advocate for your product/robot, nor be repeat buyers.”
Brooks illustrated this first law with two concrete products from iRobot, the robot company he co-founded in 1990. Its first product was the Roomba, a series of autonomous robotic vacuum cleaners introduced in 2002. The Roomba is essentially a disk (a flat, thin, round object) designed to clean the floor area of a home. It contains a set of sensors that help it navigate the floor, detect obstacles, and avoid steep drops like a step or stairs. It cannot go up and down stairs or even a single step. It clearly cannot clean windows. It can only clean a flat floor, and that’s all its looks promise.
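To make the first law concrete, here is a minimal, hypothetical sketch of the kind of reactive control loop a floor-cleaning robot might run; the sensor names and behaviors are illustrative assumptions on my part, not iRobot’s actual code:

```python
# A hypothetical reactive control step for a Roomba-like robot: the two sensor
# classes mentioned above (obstacle bumpers and cliff detectors) are enough,
# because the disk shape promises nothing more than flat-floor cleaning.
def control_step(bump_sensed: bool, cliff_sensed: bool) -> str:
    if cliff_sensed:
        return "back_up_and_turn"  # a drop (step or stair edge) detected: retreat
    if bump_sensed:
        return "turn_away"         # obstacle contact: pick a new heading
    return "drive_forward"         # clear floor: keep covering the area

# Example: at a stair edge the robot retreats, regardless of obstacles ahead.
assert control_step(bump_sensed=True, cliff_sensed=True) == "back_up_and_turn"
```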
He then discussed the PackBot, a series of remotely operable military robots introduced by iRobot in 2000. The PackBot was deployed in the US wars in Iraq and Afghanistan. It was also used to help search through the debris of the World Trade Center after 9/11/2001, and in the Fukushima nuclear plant accident in 2011. Unlike the Roomba, the PackBot runs on tracks like those of a miniature tank, so it can move over rough terrain, including rocks, drops, and steps.
“When the Fukushima disaster happened, in 2011, PackBots were able to operate in the reactor buildings that had been smashed and wrecked by the tsunami, open door handles under remote control, drive up rubble covered staircases and get their cameras pointed at analog pressure and temperature gauges so that workers trying to safely secure the nuclear plant had some data about what was happening in highly radioactive areas of the plant.”
Preserving People’s Agency
“The worst thing for its acceptance by people that a robot can do in the workplace is to make their jobs or lives harder, by not letting them do what they need to do,” wrote Brooks. He illustrated this second law with two examples.
The first example is the use of robots in hospitals to collect dirty sheets and dishes from the patient floors, making the lives of the nursing staff easier. But if the robots don’t get out of the way when there is an emergency, as often happens in hospitals, they may end up blocking life-saving work, like a gurney being wheeled down a corridor with a critically ill patient or emergency workers waiting to enter an elevator.
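One way to read this example is as a design rule: the robot’s errand should always lose to people’s needs. Below is a hedged sketch of that yield-first behavior; the class and its methods are hypothetical, not drawn from any deployed hospital robot:

```python
from enum import Enum, auto

class Mode(Enum):
    DELIVERING = auto()
    YIELDING = auto()

class HallwayRobot:
    """Hypothetical delivery robot whose first rule is: never block people."""

    def __init__(self):
        self.mode = Mode.DELIVERING

    def step(self, emergency_nearby: bool) -> Mode:
        if emergency_nearby:
            # People's agency outranks the robot's errand: pull aside and wait.
            self.mode = Mode.YIELDING
            self.pull_over()
        elif self.mode is Mode.YIELDING:
            # The corridor is clear again: quietly resume the delivery route.
            self.mode = Mode.DELIVERING
            self.resume_route()
        return self.mode

    def pull_over(self) -> None:
        ...  # move into the nearest alcove and stop (hardware-specific)

    def resume_route(self) -> None:
        ...  # re-plan and continue toward the original destination
```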
The robots in the second example are autonomous vehicles driving around cities like San Francisco or Austin with no human driver. Sometimes a vehicle ends up blocking an intersection until a remote operator monitoring multiple vehicles gets it out of the way. There have also been cases where autonomous vehicles wandered into the scene of a fire, got confused, and simply stopped, getting in the way of the critical work being done by the firefighters.
“The autonomous vehicles took agency from people going about their regular business on the streets, but worse took away agency from firefighters whose role is to protect other humans. Deployed robots that do not respect people and what they need to do will not get respect from people and the robots will end up undeployed.”
Robust Robots that Work Every Time
“For a customer to be happy with a robot it must appear to work every time it tries a task, otherwise it will frustrate the user to the point that they will question whether it makes their life better,” said Brooks. “Making robots that work reliably in the real world is hard. In fact, making anything that works physically in the real world, and is reliable, is very hard.”
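To put rough numbers on the “another 9 in reliability” phrasing of Brooks’ third law, here is a small illustrative calculation; the fleet size is a made-up figure, not from the essay:

```python
# Illustrative only: what each extra "9" of per-task reliability buys at scale.
daily_tasks = 1_000_000  # hypothetical fleet: one task per robot per day

for nines in (3, 4, 5):
    reliability = 1 - 10 ** -nines         # 99.9%, 99.99%, 99.999%
    failures = daily_tasks * 10 ** -nines  # expected failed tasks per day
    print(f"{reliability:.3%} reliable -> ~{failures:,.0f} failures per day")
```

Even at 99.9%, a million daily tasks means roughly a thousand visible failures every day, which is why, in Brooks’ estimate, each additional 9 takes another decade of shaking out special cases.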
Most software programs are designed and tested to operate in a well-understood environment. While unanticipated interactions or software bugs may cause programs to fail, designers and programmers go out of their way to anticipate and minimize such failures.
Robots, on the other hand, interact with objects and people in the real physical world. They have to deal with the unpredictability of that world, such as objects not being where they were expected, and with the highly variable behavior of their human partners. “Getting software that adequately adapts to the uncertain changes in the world in that particular instance and that particular instant of time is where the real challenge arises in robotics.”
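That challenge suggests a defensive pattern: re-sense immediately before acting and verify immediately after, rather than trusting a plan built on a stale snapshot of the world. Here is a minimal sketch, assuming a hypothetical robot API (`sense_pose`, `move_gripper_to`, `holding`, and so on):

```python
import time

def pick_object(robot, target, max_attempts=3):
    """Grasp `target`, re-sensing before and verifying after every attempt,
    because the object (or a person) may move between observations."""
    for _ in range(max_attempts):
        pose = robot.sense_pose(target)   # re-observe: never act on a stale estimate
        if pose is None:
            time.sleep(0.5)               # occluded or missing: wait and look again
            continue
        robot.move_gripper_to(pose)
        robot.close_gripper()
        if robot.holding(target):         # verify the grasp instead of assuming it
            return True
        robot.open_gripper()              # failed attempt: release and retry
    return False                          # out of attempts: escalate to a human
```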
“I have rarely seen a new technology that is less than ten years out from a lab demo make it in to a deployed robot,” wrote Brooks in conclusion. “It takes time to see how well the method works, and to characterize it well enough that it is unlikely to fail in a deployed robot that is working by itself in the real world. Even then there will be failures, and it takes many more years of shaking out the problem areas and building it into the robot product in a defensive way so that the failure does not happen again.”