“The era of Artificial Intelligence is here, and boy are people freaking out,” wrote technologist, entrepreneur, and VC investor Marc Andreessen in the opening sentence of Why AI Will Save the World, an article posted on his personal website. The article argues that not only will AI not destroy the world, but that it might in fact help save it.
While a computer science student at the University of Illinois, Andreessen led the team that developed Mosaic in the early 1990s, the first easy-to-use graphical web browser that could be ported to a wide range of computers. Soon after its release in 1993, millions started using Mosaic to access the fast-growing World Wide Web. After getting his degree in 1993, Andreessen moved to Silicon Valley and co-founded Netscape, one of the first companies focused on bringing the Web to the commercial world.
The Mosaic browser and Netscape played a major role in the explosive growth of the internet in the 1990s. To give us a sense of the world-changing transformations we might expect from generative AI, a recent essay in The Economist cited the browser, along with the printing press, as two major historical analogues to generative AI. Given his accomplishments, Andreessen is clearly someone whose opinions we should pay attention to when discussing the potential impact of generative AI, large language models, chatbots, and related AI technologies.
His article is organized in several sections. The first two give a detailed explanation of why he believes that AI can make everything we care about better, and why, in contrast to his positive view, so many people are panicking. He then systematically discusses and refutes each of the major AI risks that are causing the fear and panic. He adds that the real risk is not aggressively pursuing AI in the US and the West in general, and proposes a simple plan for what’s to be done.
Let me summarize a few of the key points in each section.
Why AI Can Make Everything We Care About Better
“The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better,” wrote Andreessen. “Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality.”
“What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.”
Andreessen lists a number of concrete examples of how AI might be able to augment human intelligence, including:
- every child will have a highly knowledgeable and helpful AI tutor that will help them maximize their potential;
- an AI assistant/coach/mentor/trainer/advisor/therapist will help maximize every person’s life’s opportunities and outcomes;
- every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement;
- scientific and health care breakthroughs will dramatically expand, as AI helps us further decode the laws of nature; and
- productivity growth throughout the economy will accelerate dramatically, driving the creation of new industries and jobs.
So, Why The Panic?
“What explains this divergence in potential outcomes, from near utopia to horrifying dystopia?” asks Andreessen. Why are public conversations about AI frequently shot through with hysterical fear and paranoia?
Ever since the advent of industrialization over 200 years ago, there’ve been periodic fears about the impact of technology-based automation on jobs. In the 1810s, for example, the so-called Luddites smashed the new machines that were threatening their textile jobs. But each time those fears arose in the past, technology advances ended up creating more jobs than they destroyed.
Automation anxieties have understandably accelerated in recent years, as our increasingly smart machines are now being applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. The concerns surrounding AI’s long-term impact on jobs may well be in a class by themselves. But the scale of labor force shifts that AI might unleash isn’t without precedent; it is likely similar to the scale of the shifts out of agriculture in the early 20th century and out of manufacturing in the past few decades.
“Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic – a social contagion that convinces people the new technology is going to destroy the world, or society, or both,” wrote Andreessen. “But a moral panic is by its very nature irrational – it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.”
“This moral panic is already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws. These actors, who are making extremely dramatic public statements about the dangers of AI – feeding on and further inflaming moral panic – all present themselves as selfless champions of the public good.”
AI Risks
Andreessen then discusses five often mentioned AI risks: AI will variously kill us all; it will ruin our society; it will take all our jobs; it will cause crippling inequality; and it will enable bad people to do awful things. Let me briefly summarize his comments about AI posing an existential risk because it might decide to literally wipe out humanity.
“The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture,” wrote Andreessen. This has long been a theme of countless novels and films, from Mary Shelley’s 1818 novel Frankenstein to James Cameron’s Terminator films. Presumably, the evolutionary purpose of these myths is to motivate us to seriously consider the potential risks of new powerful technologies.
But why would AI decide to literally kill humanity? “AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive.” In the end, AI is a machine, no more likely to decide to kill you than your car is to decide to crash at high speed, or your toaster to decide to catch fire and burn your house down with your family inside.
The Risk Of Not Pursuing AI With Maximum Force And Speed
“The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not,” said Andreessen. The reason is that China has a vastly different vision for AI than the US and the West. China views AI as a mechanism for authoritarian population control, and it doesn’t intend to limit that strategy to China but to proliferate it all across the world.
Rather than allowing ungrounded panics around AI to limit its development and applications, we should seek to achieve AI technology superiority, and we should drive AI into our economy and society in order to maximize its gains for economic productivity and human potential.
What Is To Be Done?
In conclusion, Andreessen proposes a simple plan:
- Allow big AI companies to build AI as fast as they can, but don’t allow them to use claims of AI risk to establish a government-protected cartel that insulates them from market competition.
- Allow startup AI companies to build AI as fast and aggressively as they can. They should not be granted government protection or assistance, but simply be allowed to compete.
- There should be no regulatory barriers to open source AI. Open source AI should be allowed to proliferate and compete with both large AI companies and startups. The widespread availability of open source AI will ensure that AI is available to everyone who can benefit from it.
- AI should be embraced as a powerful tool for solving problems, both to maximize society’s defensive capabilities against the risks of bad people doing bad things with AI, and to address major societal problems like climate, disease, and malnutrition.
- And, finally, we should use the full power of our private sector, our scientific establishment, and our governments to help achieve the US and Western non-authoritarian vision for AI.
Comments