“During this period of astonishing technical progress and public engagement on artificial intelligence, the world has been grappling with how to get AI right — and for good reason,” wrote James Manyika in “Getting AI Right: A 2050 Thought Experiment.” Manyika is Senior Vice President of Research, Technology and Society at Google and Chairman Emeritus of the McKinsey Global Institute. His essay is part of The Digitalist Papers, a series of 12 essays on the emerging age of AI.
“AI is a foundational and general-purpose technology with the potential to benefit people and society in a range of ways,” Manyika noted:
- assist people with everyday tasks, help them access and benefit from the world’s information and knowledge, and pursue their most ambitious, productive, and creative endeavors;
- contribute to economic progress by enabling entrepreneurs, powering small and large businesses, and fueling innovation and productivity;
- accelerate scientific advances in fields ranging from medicine to materials, physics, climate sciences and more; and
- help make progress on many of humanity’s most pressing challenges and opportunities, from food security to health and well-being.
“At the same time there are concerns about AI’s development, deployment, and use, including: robustness, accuracy, bias, and privacy risks; risks from misapplication and misuse; potentially complex impacts on society from jobs to education and democracy, and the possibility of unintended or unforeseen consequences; and challenges of alignment with human preferences and human flourishing as AI becomes more capable. Such concerns, if unaddressed, could create information hazards, lead to safety and security risks, and cause harm.”
To help him explore how to get AI right, Manyika compiled a Working List of Ten Hard Problems in AI that must be taken into account in order to realize the societal benefits of AI while addressing its risks and challenges. Based on this list, he then devised a thought experiment focused on a simple but important question:
“It’s the year 2050. AI has turned out to be hugely beneficial to society and generally acknowledged as such. What happened?”
“This question, or thought experiment, aims to elicit the most worthwhile possibilities we will have achieved, the most beneficial opportunities realized, the hard problems solved, the risks averted, misuses avoided, and unintended consequences mitigated if we are to acknowledge a positive outcome in 2050. It is a way of asking what we need to get right if AI is to be a net benefit to society in a not-too-distant future.”
For the Digitalist essay, Manyika condensed his original list of AI problems from ten to five. For each of the five, his essay includes a description of the problem, the actions we must undertake to address it, and some of the key markers we should track to ensure that we’re making progress. Let me summarize each of the five problems.
1. Develop a more capable, safer, and trustworthy AI
Develop AI systems that can help us achieve our most ambitious and beneficial uses. Such an AI must also help us address the shortcomings of current systems, mitigate concerns, and build public trust so that AI does not create new harms or worsen existing societal issues.
Key markers of progress
- AI has improved paradigms of learning, real-world understanding, meta-level reasoning, and human-directed self-improvement.
- AI complements and extends human capabilities in novel and useful ways.
- AI is capable of developing novel scientific concepts, theories, and experiments, and of generating genuinely new insights and discoveries.
- AI has achieved greater robustness, interpretability, privacy preservation, factuality, safety, and security.
- AI offers mechanisms for mitigating bias, information hazards, and other societal concerns.
- AI training and use are orders of magnitude less resource-intensive.
- There has been a step change in funding for research and infrastructure, in academia and beyond, on the uses, safety, and implications of AI.
2. Leverage AI to address humanity’s greatest challenges and deliver positive benefits for all
Beyond AI’s technical capabilities, focus on AI’s potential to benefit humanity by assisting people, powering economic prosperity, advancing science, addressing societal challenges, and improving lives everywhere.
Key markers of progress
- AI has enabled step-change improvements in universal access to information, knowledge, and services critical to well-being.
- AI has contributed to gains in productivity for individuals, businesses and other organizations, and for a wide range of industry sectors.
- Worker-assistive AI has a bigger effect than worker-displacing AI; labor policies, e.g., re-skilling and wage support, are in place for impacted workers.
- Material progress has been achieved in addressing inequalities (within and between countries) that could hinder broad participation in the development and use of AI.
- A vibrant ecosystem of researchers, entrepreneurs, companies, and countries is involved in AI’s development, use, and benefits.
- AI has increased the rate and number of breakthroughs that benefit society, such as cures for major diseases and climate change mitigation.
3. Responsible development, deployment, use, and governance of AI
The responsible development of AI encompasses the conduct and dissemination of research; access to leading-edge technologies, products, and services; appropriate uses of AI by individuals, companies, industries, and governments; cooperation and coordination between countries, companies, and other key actors; ethical, economic, geopolitical, and national security considerations; and human involvement and oversight.
Key markers of progress
- National (and regional) AI governance and regulations have evolved to enable innovation and beneficial use of AI and to address the risks in its development, deployment, and use.
- Global adoption of internationally harmonized AI standards, risk frameworks, and industry practices has been achieved.
- Robust and broadly deployed mechanisms have been developed for detecting, reporting, and mitigating misuse of AI for misinformation, cybercrime, and attacks on critical infrastructure, as well as for chemical, biological, radiological, and nuclear risks.
- Widely adopted mechanisms have been developed that encourage open innovation and wide participation in AI research, development, and use, while limiting risks of bad actors taking advantage of such openness.
- International governance has been achieved for coordinating broadly agreed global principles, goals, and functions.
- Beneficial uses of AI that fall outside commercial pursuits have emerged, with the support of governments and global institutions enabling their development, deployment, and use.
4. What it means to be human in the age of AI, and the co-evolution of societal systems
An increasingly capable AI will require the adaptation of many aspects of our lives: our societal systems, social contracts, civic participation, education, and governance institutions. In addition, we must figure out what it means to be human in the emerging age of AI, and how we should think about work, social relations, achievement, purpose, and more. This will require us to reflect on what it means to be intelligent, creative, or cognitively human when many of these characteristics can be imitated or even, at some point in the future, done better by AI.
Key markers of progress
- Sectors, institutions, and systems that provide services (e.g., education, healthcare, public services) are making appropriate use of AI’s capabilities.
- Institutions that shape societal arrangements and social contracts, such as legal systems and governments, have incorporated the beneficial capabilities of AI and instituted mechanisms to mitigate its risks.
- Effective mechanisms have emerged for individuals and communities with different beliefs about AI boundaries to coexist and thrive.
- Language and mental models for what it means to be human and flourish alongside highly capable AI have emerged and are being robustly debated.
- One or more celebrated genres of AI-enabled art, or other creative endeavors, have emerged, with many practitioners and new museum-like institutions.
- Interfaces have emerged that help humans think in dimensions they were not capable of before, enabling unprecedented levels of cognitive capability that neither humans nor AI could achieve on their own.
5. Alignment with increasingly powerful and capable AI systems
Humans have achieved alignment and compatibility with increasingly powerful AI, a challenge that becomes more important and consequential as AI grows more powerful and develops capabilities, arguably including intelligence, beyond those of humans.
Key markers of progress
- Robust technical methods exist for aligning individual AI systems with individual or collective human preferences.
- International and societal agreements have been reached to base AI alignment on universal norms.
- Practical methods have been developed, and ongoing research established, to better understand interactions among communities of intelligent agents.
- Ongoing and robust research, assessment, and testing anticipate likely or emerging AI capabilities in advance of their achievement.
- Mechanisms are in place for contending with uncontrolled gains, corruption, deception, and manipulation by capable AI systems.
- Robust monitoring, incident reporting, response mechanisms, and pre-agreed protocols exist for the emergence of superintelligent capabilities.
“The problems presented here may well be an idiosyncratic view of what we must get right if AI is to have been a net positive for humanity when we look back in 2050,” wrote James Manyika in conclusion. “Readers will undoubtedly have their own views of what we must get right and what will constitute progress. There should be vigorous discussion, debate, and iteration to get to better and, ideally, shared lists of what matters most to get right.”
“While such lists will likely evolve as AI advances, its uses evolve, and society’s experience with it grows, the work to get AI right must not wait. It must be taken on now with a focus on not just what could go wrong, but also, and importantly, what could go right and how we shape it in the face of some unknowns. It is work that must involve everyone — researchers, developers and users of AI, the private and public sector, academia, civil society, and governments — so we should get on with it.”