Software-intensive systems are generally quite flexible, able to evolve and adapt to changing product and market requirements. However, that very flexibility makes it difficult to adequately anticipate and test all the interactions between the various components of the system. Even if all the components are highly reliable, problems can still occur if a rare set of interactions arises that compromises the overall behavior and safety of the system.
Things are even tougher with people-centric sociotechnical systems, which have to deal not only with potential software problems, but also with the even more complex issues involved in human behaviors and interactions. And at the bleeding edge of complexity are data-driven AI systems, which are generally based on statistical analysis and machine learning methods.
What is it that makes these systems so intrinsically complex? Why aren’t they simpler? What purpose does this complexity serve? Several years ago, I came across a reasonable answer to these seemingly Socratic questions in a very interesting paper, “Complexity and Robustness,” by professors Jean Carlson and John Doyle of UC Santa Barbara and Caltech, respectively. Their paper draws on evolutionary biology to help explain the complexity of man-made systems.
According to Carlson and Doyle, you can find very simple biological organisms in nature, and you can design very simple objects. The key ingredient you give up is not their basic functionality, but their robustness: the ability of biological organisms to survive, or of engineered objects to perform well, under a wide range of conditions, including the failure of individual components and the arrival of new, unanticipated events. Robustness implies the ability to adapt and keep going in spite of a rapidly changing environment.
There is a continuing struggle between complexity and robustness in both evolution and human design. A kind of survival imperative, whether in biology or engineering, requires that simple, fragile systems become more robust. But the mechanisms that increase robustness will in turn make the system considerably more complex. That additional complexity then brings its own unanticipated failure modes, which are corrected over time with additional robustness mechanisms, which further add to the complexity of the system, and so on. This balancing act between complexity and robustness is never done.
For example, as part of their evolution, biological organisms, from plants to mammals, have developed highly sophisticated control and regulatory mechanisms designed to help them survive in dramatically fluctuating environments. In humans these control mechanisms form the autonomic nervous system, which monitors and regulates involuntary functions like breathing, digestion, heart rate and perspiration to keep us alive.
These control mechanisms generally bring along their own problems. One of the most important protection mechanisms, for example, is the immune system, which guards against disease. But the immune system is subject to its own serious disorders, such as immunodeficiencies, which arise when its activity is abnormally low, and autoimmune diseases, which are caused by a hyperactive immune system.
Something similar is happening with our increasingly sophisticated smart machines (e.g., autonomous cars, collaborative robots, cloud-connected appliances) and smart systems (e.g., cities, healthcare, finance). These smart products and systems tend to be software-intensive, people-centric, and data-driven. The smarter we want them to be, the higher their complexity goes. By their very nature, they will sense, respond and adapt to a changing environment, so testing them feels more like sending a teenager into the world than like classic IT testing. Given that we want them to handle unanticipated situations, they must have a fair degree of flexibility and autonomy, but it’s hard to be sure that the system (or the teenager) will always do what we want it to do.
One of the best discussions of the challenges of bleeding-edge AI systems is an article on the Benefits and Risks of Artificial Intelligence co-authored by Tom Dietterich and Eric Horvitz, the current and former presidents, respectively, of the Association for the Advancement of Artificial Intelligence. They list three major risks that we must pay close attention to (a brief sketch illustrating the third follows the list):
Complexity of AI software. “We are all familiar with errors in ordinary software. For example, apps on our smartphones sometimes crash. Major software projects, such as HealthCare.Gov, are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extremely costly outcomes and deaths. The study of the verification of the behavior of software systems is challenging and critical, and much progress has been made. However, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.”
Cyberattacks. “Criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are no different from other software in terms of their vulnerability to cyberattack. But because AI algorithms are being asked to make high-stakes decisions, such as driving cars and controlling robots, the impact of successful cyberattacks on AI systems could be much more devastating than attacks in the past… Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large scale cyberattacks.”
The Sorcerer’s Apprentice. “Suppose we tell a self-driving car to ‘get us to the airport as quickly as possible!’ Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians?… Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave… In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability - and responsibility - of working with people to obtain feedback and guidance. They must know when to stop and ‘ask for directions’ - and always be open for feedback.”
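To make the Sorcerer’s Apprentice risk concrete, here is a minimal, purely illustrative sketch (mine, not from the Dietterich and Horvitz article) of how a route planner that literally minimizes travel time differs from one whose objective also encodes the constraints a human instructor would normally leave unsaid. All class names, fields and numbers are assumptions made up for the example.

```python
from dataclasses import dataclass

# Hypothetical example only: a planner that minimizes whatever cost function
# it is given. All names and numbers are made up for illustration.

SPEED_LIMIT_MPH = 65  # a constraint the naive instruction leaves implicit

@dataclass
class Route:
    travel_time_hours: float
    max_speed_mph: float
    pedestrian_risk: float  # 0.0 means no one is endangered

def naive_cost(route: Route) -> float:
    # "Get us to the airport as quickly as possible," taken literally:
    # nothing matters except travel time.
    return route.travel_time_hours

def constrained_cost(route: Route) -> float:
    # The same goal, plus the constraints the human instructor assumed:
    # obey the speed limit and never endanger pedestrians.
    if route.max_speed_mph > SPEED_LIMIT_MPH or route.pedestrian_risk > 0.0:
        return float("inf")  # reject unacceptable plans outright
    return route.travel_time_hours

routes = [
    Route(travel_time_hours=0.2, max_speed_mph=300, pedestrian_risk=0.8),
    Route(travel_time_hours=0.6, max_speed_mph=60, pedestrian_risk=0.0),
]

# The planner has no notion of "what we really meant"; it just minimizes cost.
print(min(routes, key=naive_cost))        # picks the reckless 300 mph plan
print(min(routes, key=constrained_cost))  # picks the lawful, safe plan
```

The point is not the code but the gap between the two cost functions: everything the constrained version adds is something the naive instruction silently assumed, which is exactly the gap that feedback and “asking for directions” are meant to close.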
To help us better understand and manage these critical challenges, Horvitz spearheaded the recent launch of AI100 at Stanford University, “a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.”
A related initiative, the Future of Life Institute (FLI), was founded a year ago as a volunteer-run research organization with the mission to help humanity steer a positive course when embracing new technologies, including addressing the potential risks from the development of human-level artificial intelligence. Recently, FLI published an Open Letter, which it invites readers to endorse and sign, recommending “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” FLI has also developed a Research Priorities report with specific recommendations for maximizing the societal benefit of AI, including the following (a brief sketch contrasting the first two follows the list):
- “Verification (Did I build the system right?): how to prove that a system satisfies certain desired formal properties;”
- “Validity (Did I build the right system?): how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences;”
- “Security: how to prevent intentional manipulation by unauthorized parties;” and
- “Control (OK, I built the system wrong, can I fix it?): how to enable meaningful human control over an AI system after it begins to operate.”
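As a rough illustration of the difference between verification and validity (my own sketch, not taken from the FLI report), consider a simple thermostat controller: a test can verify that the code satisfies its formal specification, yet the system may still be the wrong one if that specification omits something we actually care about. All names and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch contrasting verification and validity.
# All names and thresholds are made-up assumptions for illustration.

def heater_on(temperature_c: float, setpoint_c: float) -> bool:
    # Formal specification: the heater is on exactly when the measured
    # temperature is below the setpoint.
    return temperature_c < setpoint_c

def verify_spec() -> None:
    # Verification ("Did I build the system right?"): check that the
    # implementation satisfies the stated property across many inputs.
    for t in range(-40, 60):
        for s in range(5, 35):
            assert heater_on(float(t), float(s)) == (t < s)

verify_spec()  # passes: the code matches its specification

# Validity ("Did I build the right system?") is a different question: the
# specification above says nothing about, say, shutting off if the sensor
# fails and reads low forever, so a fully verified controller can still
# overheat the room. No amount of verification catches a wrong or
# incomplete specification.
```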
In addition, the report recommends research on: maximizing the economic benefits of AI while mitigating adverse effects such as increased inequality and unemployment; dealing with the legal and ethical questions that may arise with intelligent, autonomous systems; and helping to ensure that any future AI system that might significantly surpass human intelligence will have a robust and beneficial impact on society.
“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the Open Letter points out. “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”