Two years ago, Stanford University launched the One Hundred Year Study on Artificial Intelligence (AI100), “to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then-current state of the field, as well as to explore both the technical advances and societal challenges likely to arise over the following 10 to 15 years.
The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examines the likely impact of AI on a typical North American city by the year 2030. The report is organized into three main sections: Section I describes the key research trends influencing AI’s future; Section II examines the economic sectors most likely to be impacted by AI; and Section III looks into issues surrounding AI and public policy. Over the past several weeks I’ve discussed the first and second sections. I would now like to turn my attention to Section III: Prospects and Recommendations for AI Public Policy.
“Throughout history, humans have both shaped and adapted to new technologies. This report anticipates that advances in AI technologies will be developed and fielded gradually - not in sudden, unexpected jumps in the techniques themselves - and will build on what exists today, making this adaptation easier… The measure of success for AI applications is the value they create for human lives. Going forward, the ease with which people use and adapt to AI applications will likewise largely determine their success.”
The report then adds some very important warnings. “Conversely, since AI applications are susceptible to errors and failures, a mark of their success will be how users perceive and tolerate their shortcomings. As AI becomes increasingly embedded in daily lives and used for more critical tasks, system mistakes may lead to backlash from users and negatively affect their trust.”
After earlier cycles of inflated expectations and disappointment, the field has been making steady progress. Now that AI seems to be reaching a tipping point of market acceptance, it must be especially careful to avoid another round of hype and unfulfilled promises. The report lists some of the potential mistakes, along with advice on how best to prevent them.
- Hype vs. reality. Carefully explain what we know and what we don’t know, and set realistic expectations. For example, even if self-driving cars prove safer than human-driven cars, there will inevitably be serious accidents, and they will attract considerable attention.
- AI vs. humans. Explain AI’s strengths and limitations. Humans and AI have complementary capabilities. Like other tools, AI’s primary objective should be to enhance human capabilities.
- Widening of inequalities. AI technologies will significantly improve the abilities of those with access to them, but if that access is unfairly distributed across society, AI can widen inequalities of opportunity.
- Deepening social biases. AI applications, and the data they rely on, can reflect the biases of their designers, raising issues of fairness. AI-based decision-making tools should strive to be objective and to significantly reduce, rather than amplify, human biases.
- Privacy concerns. AI-enabled surveillance is becoming widespread. Great care must be taken to avoid serious privacy violations.
To help address these individual and societal concerns, the Study Panel offered three policy recommendations:
1. Define a path toward accruing technical expertise in AI at all levels of government. “Without an understanding of how AI systems interact with human behavior and societal values, officials will be poorly positioned to evaluate the impact of AI on programmatic objectives.”
2. Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems. “Some interpretations of federal laws… are ambiguous regarding whether and how proprietary AI systems may be reverse engineered and evaluated by academics, journalists, and other researchers. Such research is critical if AI systems with physical and other material consequences are to be properly vetted and held accountable.”
3. Increase public and private funding for interdisciplinary studies of the societal impacts of AI. “As a society, we are underinvesting resources in research on the societal implications of AI technologies. Private and public dollars should be directed toward interdisciplinary teams capable of analyzing AI from multiple angles. Research questions range from basic research into intelligence to methods to assess and affect the safety, privacy, fairness, and other impacts of AI.”
The private sector has started to address some of these concerns. Earlier this year, five major companies, Amazon, Google, Facebook, IBM, and Microsoft, announced the creation of a non-profit organization, the Partnership on AI. The founding members are in discussion with other companies, research labs, and professional organizations about joining the Partnership.
“The objective of the Partnership on AI is to address opportunities and challenges with AI technologies to benefit people and society,” read the press release announcing its creation. “Together, the organization’s members will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology. It does not intend to lobby government or other policymaking bodies.”
“Like other technologies, AI has the potential to be used for good or nefarious purposes…” the AI100 report concludes. “A vigorous and informed debate about how to best steer AI in ways that enrich our lives and our society, while encouraging creativity in the field, is an urgent and vital need… In the coming years, as the public encounters new AI applications in domains such as transportation and healthcare, they must be introduced in ways that build trust and understanding, and respect human and civil rights. While encouraging innovation, policies and processes should address ethical, privacy, and security implications, and should work to ensure that the benefits of AI technologies will be spread broadly and fairly.”