“Any sufficiently advanced technology is indistinguishable from magic” is one of the most memorable quotes of Arthur C. Clarke, author of 2001: A Space Odyssey. I still remember a Monday in the summer of 1996, when around 4 am I was in my Tokyo hotel room doing e-mail on my laptop while listening over the Internet to a live baseball game being played in New York, where it was Sunday afternoon. Today this would be no big deal, but at the time it felt like one of those magical moments Clarke had in mind, perhaps the moment when I truly understood the transformative power of the rapidly growing Internet.
Artificial Intelligence may now be going through such an Internet moment. “Artificial intelligence is suddenly everywhere. It’s still what the experts call soft A.I., but it is proliferating like mad.” So starts an excellent Vanity Fair article I recently wrote about. “Everything that we formerly electrified we will now cognitize,” observed Kevin Kelly in another excellent article. “Experts envision automation and intelligent digital agents permeating vast areas of our work and digital lives by 2025, but they are divided on whether these advances will displace more jobs than they create,” was the overriding finding of a report published by the Pew Research Center this past August.
For the past several years, Edge.org has been posing a rather general, philosophical annual question to a diverse group of thinkers. Last year the question was What Scientific Idea is Ready for Retirement?, and two years ago it was What Should We Be Worried About? For 2015, Edge.org chose What Do You Think about Machines that Think? as its annual question.
John Brockman, publisher and editor of Edge.org, explained why he chose AI as the theme for this year’s question: “In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI) - whether computers can really think, refer, be conscious, and so on - have led to new conversations about how we should deal with the forms that many argue actually are implemented. These AIs, if they achieve Superintelligence (Nick Bostrom), could pose existential risks that lead to Our Final Hour (Martin Rees). And Stephen Hawking recently made international headlines when he noted ‘The development of full artificial intelligence could spell the end of the human race.’”
“But wait! Should we also ask what machines that think, or, AIs, might be thinking about?… Do they have feelings?… Will we, and the AIs, include each other within our respective circles of empathy?… Is AI becoming increasingly real? Are we now in a new era of the AIs?”
People have long worried about the impact of technology on society, whether discussing railroads, electricity, and cars in the Industrial Age, or the Internet and smartphones that are now permeating just about all aspects of our lives. But, AI may be in a class by itself. Like no other technology, AI forces us to explore the boundaries between machines and humans. To write about Machines that Think with rigor, writes Brockman, “it’s time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, The Borg.”
I read a number of the 191 responses to his question. They were generally quite interesting. Some were really worried about the future of the human race. Cambridge emeritus professor Martin Rees wrote in Organic Intelligence Has No Long-Term Future: “… by any definition of thinking, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the cerebrations of AI.” Oxford professor Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, wrote: “Machines are currently very bad at thinking (except in certain narrow domains). They’ll probably one day get better at it than we are (just as machines are already much stronger and faster than any biological creature).”
Others were not so sure that machines are destined to surpass human intelligence. UC Berkeley psychologist Alison Gopnik wrote in Can Machines Ever Be As Smart As Three-Year-Olds?: “One of the fascinating things about the search for AI is that it’s been so hard to predict which parts would be easy or hard… And, it turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby… at least for now, we have almost no idea at all how the sort of creativity we see in children is possible. Until we do, the largest and most powerful computers will still be no match for the smallest and weakest humans.”
Daniel Dennett, Tufts philosophy professor and co-director of its Center for Cognitive Studies, wrote: “The Singularity - the fateful moment when AI surpasses its creators in intelligence and takes over the world - is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility (‘Well, in principle I guess it’s possible!’) coupled with a deliciously shudder-inducing punch line (‘We’d be ruled by robots!’)… Add a few illustrious converts - Elon Musk, Stephen Hawking, and David Chalmers, among others - and how can we not take it seriously?” But, he concludes that “The real danger… is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.”
Most of us who’ve been closely involved with computers think of them as both fascinating and pedantic. They keep surprising us with their near-magical accomplishments, while often frustrating us with their lack of simple common sense. I don’t spend much time worrying about whether they’ll one day be smarter than we are, and thus able to design, program, and debug themselves. We only wish…
Rodney Brooks, MIT emeritus professor and founder and chairman of Rethink Robotics, writes in Mistaking Performance For Competence Misleads Estimates Of AI’s 21st Century Promise And Danger: “People are getting confused and generalizing from performance to competence and grossly overestimating the real capabilities of machines today and in the next few decades… The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded… people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.”
Kevin Kelly, author and co-founder of Wired, concluded in Call Them Artificial Aliens: “What are humans for? I believe our first answer will be: humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different - to create alien intelligences. Call them artificial aliens.”
A similar sentiment was expressed by author Nicholas Carr: “Machines that think think like machines. That fact may disappoint those who look forward, with dread or longing, to a robot uprising. For most of us, it is reassuring. Our thinking machines aren’t about to leap beyond us intellectually, much less turn us into their servants or pets. They’re going to continue to do the bidding of their human programmers.”
“Much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.”
What does this all mean? Should we panic? Should we do nothing? Are there serious concerns about the future of AI that require our attention, not unlike the concerns inherent in the evolution of other highly complex technologies like the Internet, genomics, and nanotechnology?
The best articulation I’ve seen of the potential risks of AI technologies is an essay co-authored by Eric Horvitz, director of Microsoft’s Redmond Research Lab and former president of the Association for the Advancement of Artificial Intelligence (AAAI), and Tom Dietterich, professor at Oregon State University and current president of AAAI. In their essay, they list three major risks that we should pay close attention to.
Complexity of AI software. “We are all familiar with errors in ordinary software. For example, apps on our smartphones sometimes crash. Major software projects, such as HealthCare.Gov, are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extremely costly outcomes and deaths. The study of the verification of the behavior of software systems is challenging and critical, and much progress has been made. However, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must redouble our efforts in software quality.”
Cyberattacks. “Criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are no different from other software in terms of their vulnerability to cyberattack. But because AI algorithms are being asked to make high-stakes decisions, such as driving cars and controlling robots, the impact of successful cyberattacks on AI systems could be much more devastating than attacks in the past… Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large scale cyberattacks.”
The Sorcerer’s Apprentice. “Suppose we tell a self-driving car to ‘get us to the airport as quickly as possible!’ Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave…”
“In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability - and responsibility - of working with people to obtain feedback and guidance. They must know when to stop and ‘ask for directions’ - and always be open for feedback. Some of the most exciting opportunities ahead for AI bring together the complementary talents of people and computing systems.”
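Two of those ideas lend themselves to a concrete, if deliberately toy, illustration: an objective bounded by the constraints humans take for granted, and a system that knows when to “ask for directions.” Here is a minimal sketch in Python; everything in it - the function names, the 65 mph speed limit, the 0.9 confidence threshold - is my own illustrative assumption, not anything proposed in the essay.

```python
# A minimal illustrative sketch (all names and numbers hypothetical).
# Part 1: the Sorcerer's Apprentice problem. "As quickly as possible,"
# taken literally, picks 300 mph; the constrained version also encodes
# the safety rules the instruction left unsaid.

def naive_speed(options_mph):
    # Optimizes only the literal objective: maximize speed.
    return max(options_mph)

def constrained_speed(options_mph, speed_limit_mph=65):
    # Same objective, but bounded by an explicit constraint.
    legal = [s for s in options_mph if s <= speed_limit_mph]
    return max(legal) if legal else min(options_mph)

# Part 2: knowing when to "ask for directions." The system acts on
# its own only when confident, and defers to a person otherwise.

def act_or_defer(action, confidence, threshold=0.9):
    if confidence >= threshold:
        return f"act: {action}"
    return f"defer to human (confidence {confidence:.2f} < {threshold})"

if __name__ == "__main__":
    options = [30, 65, 120, 300]
    print(naive_speed(options))        # 300 - the literal-minded answer
    print(constrained_speed(options))  # 65  - what the human actually meant
    print(act_or_defer("merge left", 0.97))  # confident enough to act
    print(act_or_defer("merge left", 0.55))  # stops and asks for guidance
```

Toy as it is, the sketch captures the essay’s point: the hard part is not the optimization itself but making explicit the implicit constraints, and the rules for escalating to humans, that we so easily take for granted.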
My overriding conclusion is that there are few areas of study as exciting, important and challenging as intelligence in all its various manifestations: human, machine, and assorted human-machine combinations.