After many years of promise and hype, AI seems to be finally reaching a tipping point of market acceptance. “Artificial intelligence is suddenly everywhere… it is proliferating like mad.” So starts a Vanity Fair article published around two years ago by author and radio host Kurt Andersen. And this past June, a panel of global experts convened by the World Economic Forum (WEF) named Artificial Intelligence (Open AI Ecosystems in particular) as one of its Top Ten Emerging Technologies for 2016 because of its potential to fundamentally change the way markets, businesses and governments work.
AI is now being applied to activities that not long ago were viewed as the exclusive domain of humans. “We’re now accustomed to having conversations with computers: to refill a prescription, make a cable-TV-service appointment, cancel an airline reservation - or, when driving, to silently obey the instructions of the voice from the G.P.S,” wrote Andersen. The WEF report noted that “over the past several years, several pieces of emerging technology have linked together in ways that make it easier to build far more powerful, human-like digital assistants.”
What will life be like in such an AI-based society? What impact is it likely to have on jobs, companies and industries? How might it change our everyday lives?
These questions were addressed in Artificial Intelligence and Life in 2030, a report recently published by Stanford University’s One Hundred Year Study of AI (AI100). AI100 was launched in December 2014 “to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” The core activity of AI100 is to convene a Study Panel every five years to assess the then-current state of the field, review AI’s progress in the years preceding the report, and explore the potential advances that lie ahead, as well as the technical and societal challenges and opportunities these advances might raise.
“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”
The report’s first section addresses a very important question: How do researchers and practitioners define Artificial Intelligence?
From its inception about sixty years ago, the field has never had a precise, universally accepted definition of AI. Rather, it has been guided by a rough sense of direction, such as this one by Stanford professor Nils Nilsson in The Quest for Artificial Intelligence: “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”
Such a characterization of AI depends on what we mean by a machine functioning appropriately and with foresight. It spans a very wide spectrum, as it should. Is a simple calculator intelligent because it does math much faster than the human brain? Where in the spectrum do we place thermostats, cruise control in cars, navigation applications that give us detailed directions, speech recognition, and chess- and Go-playing apps?
Over the past six decades, the frontier of what we’re willing to call AI has kept moving forward. AI suffers from what’s become known as the AI effect: AI is whatever hasn’t been done yet, and as soon as an AI problem is successfully solved, the problem is no longer considered part of AI. “The same pattern will continue in the future,” notes the report. “AI does not deliver a life-changing product as a bolt from the blue. Rather, AI technologies continue to get better in a continual, incremental way.”
One of the key ways of assessing progress in AI is to compare it to human intelligence. Any activity that computers are now able to perform that was once the exclusive domain of humans could be counted as an AI advance. And, one of the best ways of comparing AI to humans is to pit them against each other in a competitive game.
Chess was one of the earliest AI challenges. Many AI leaders were convinced at the time that it was just a matter of time before machines would consistently beat humans at chess. They tried to get there by explicitly programming machines to play chess, even though to this day we don’t really understand how chess champions think, let alone how to translate their thought patterns into a set of instructions that would enable a machine to play expert chess. These ambitious approaches met with disappointment and were abandoned in the 1980s when, after years of unfulfilled promises, a so-called AI winter of reduced interest and funding set in that nearly killed the field.
AI was reborn in the 1990s. Instead of trying to program computers to act intelligently, the field embraced a statistical, brute force approach based on analyzing vast amounts of information with powerful computers and sophisticated algorithms. AI researchers discovered that such an information-based approach produced something akin to intelligence or knowledge. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely. The more information you had, the more powerful the supercomputers, the more sophisticated the algorithms, the better the results.
Deep Blue, IBM’s chess-playing supercomputer, demonstrated the power of such a statistical, brute-force approach by defeating then-reigning world chess champion Garry Kasparov in a celebrated match in May 1997. “Curiously, no sooner had AI caught up with its elusive target than Deep Blue was portrayed as a collection of brute force methods that wasn’t real intelligence… Was Deep Blue intelligent or not? Once again, the frontier had moved.” Now, the best chess programs consistently beat the strongest human players, and even smartphone-based apps play a strong game of chess.
As human-computer chess matches no longer attract much interest, the AI frontier has moved to games considerably more complex than chess. In 2011, Watson, IBM’s question-answering system, won the Jeopardy! Challenge against the two best human Jeopardy! players, demonstrating that computers could now extract meaning from the unstructured knowledge embodied in books, articles, newspapers, web sites, social media, and anything written in natural language. And earlier this year, Google’s AlphaGo claimed victory against Lee Sedol, one of the world’s top Go players, in a best-of-five match, winning four games and losing only one. In the game of Go, there are more possible board positions than there are particles in the universe. A Go-playing system cannot simply rely on computational brute force. AlphaGo relies instead on deep learning algorithms, modeled partly on the way the human brain works.
Given the broad, changing scope of the field, what then is Artificial Intelligence? The AI100 Study Panel offers a circular, operational answer: AI is defined by what AI researchers do. The report then lists the key AI research trends, that is, the hot areas AI researchers are pursuing. These include:
- Large-scale machine learning. Machine learning gives computers the ability to learn by ingesting huge amounts of data instead of being explicitly programmed. Machine learning has been propelled dramatically forward by the huge amounts of data we now have access to and by the computational and storage resources of cloud computing. “A major focus of current efforts is to scale existing algorithms to work with extremely large data sets.” (A small illustrative sketch of learning from examples appears right after this list.)
- Deep learning. Deep learning takes machine learning to the next level, using deep graphs with multiple processing layers, which enable advanced visual applications like object recognition and video labeling, as well as significantly improved audio, speech and natural language processing.
- Reinforcement learning. “Whereas traditional machine learning has mostly focused on pattern mining, reinforcement learning shifts the focus to decision making, and is a technology that will help AI to advance more deeply into the realm of learning about and executing actions in the real world.” (A toy reinforcement learning sketch also appears after the list.)
- Robotics. “Current efforts consider how to train a robot to interact with the world around it in generalizable and predictable ways.… Advances in reliable machine perception, including computer vision, force, and tactile perception, much of which will be driven by machine learning, will continue to be key enablers to advancing the capabilities of robotics.”
- Computer Vision. “For the first time, computers are able to perform some (narrowly defined) visual classification tasks better than people. Much current research is focused on automatic image and video captioning.”
- Natural Language Processing. Natural Language Processing “is quickly becoming a commodity for mainstream languages with large data sets… Research is now shifting towards developing refined and capable systems that are able to interact with people through dialog, not just react to stylized requests.”
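To make the machine learning item above a bit more concrete, here is a minimal sketch of learning from examples rather than from explicitly programmed rules. The dataset, model choice and scikit-learn calls are purely illustrative assumptions on my part, not anything taken from the AI100 report.

```python
# Minimal illustration of "learning from data instead of being explicitly programmed":
# a classifier is shown labeled examples and estimates its own decision rule.
# Assumes scikit-learn is installed; the dataset and model are illustrative choices only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 images of handwritten digits, each with a label 0-9

# Hold out a quarter of the examples to check how well the learned rule generalizes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # no hand-written rules for what a "3" looks like
model.fit(X_train, y_train)                # the decision rule is estimated from the training data

print("held-out accuracy:", model.score(X_test, y_test))
```

The point is that nobody writes down a rule for recognizing a digit; the rule is estimated from data, and, as the report notes, the current research challenge is making that kind of estimation work at extreme scale.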
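Reinforcement learning, in turn, is about decision making rather than pattern mining. Below is a bare-bones tabular Q-learning loop on a made-up five-state corridor in which only the rightmost state pays a reward; the environment, rewards and parameters are invented purely for illustration and are not drawn from any system the report discusses.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 5-state corridor.
# Only reaching the rightmost state yields a reward; the agent must learn to walk right.
N_STATES, ACTIONS = 5, [0, 1]              # action 0 = step left, action 1 = step right
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimate for each (state, action) pair

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True              # goal reached, episode ends
    return nxt, 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: usually exploit the current estimate, occasionally explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print("learned policy (0 = left, 1 = right):",
      [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

Nothing here tells the agent that walking right is the answer; it discovers that by acting, observing rewards and updating its estimates, which is the shift from pattern mining to decision making that the report describes.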
“Over the next fifteen years, the Study Panel expects an increasing focus on developing systems that are human-aware, meaning that they specifically model, and are specifically designed for, the characteristics of the people with whom they are meant to interact. There is a lot of interest in trying to find new, creative ways to develop interactive and scalable ways to teach robots. Also, IoT-type systems - devices and the cloud - are becoming increasingly popular, as is thinking about social and economic dimensions of AI. In the coming years, new perception/object recognition capabilities and robotic platforms that are human-safe will grow, as will data-driven products and their markets.”
"The AI100 Study Panel offers a circular, operational answer: AI is defined by what AI researchers do." In any academic field, if you drill down far enough, you will come to exactly this answer. What do mathematicians study? Mathematics. What's mathematics? What mathematicians study. Who are mathematicians? Those who other mathematicians recognize as doing mathematics.
Sure, you can find all kinds of secondary characteristics. Someone whose work involves the analysis of the plays of Ibsen is almost certainly not a mathematician - at least when doing that work.
Mathematics, in some broad sense, involves the study of the formal consequences of sets of axioms. But are logicians mathematicians? How about string theorists? You can start endless, pointless debates on these questions.
There's AI as a field of study - defined by its practitioners, and very, very heavily influenced by its funders; there's AI as defined by marketers - anything that will make a product look cutting-edge, spiffy, new, and worth extra money; and there's AI as defined by the "gut feel" of most people - anything that makes a computer act, in some recognizable way and to some reasonable degree, as people expect other people to act. Advances in the field defined by the first of these have led to an explosion of the second (which will last until the next buzzword takes over) and a slow but steady growth in the third. Exactly how far this will go is impossible to guess. Deep learning, statistical techniques, and "smart" brute force are proving to be much more capable than anyone could reasonably have expected. (Most experts familiar with both Go and deep learning didn't expect that a program would beat a strong professional player for at least a decade more.) Machine translation using these techniques works way better than anyone had expected. And yet ... the results, useful as they are, have a long way to go, and it's not yet clear whether the current crop of techniques can get there.
Predicting the effects of broader employment of techniques in ways that we already know work can be difficult, but it at least starts with a reasonable base of "what we already know". Predicting either what else (that hasn't been shown to work yet) will become accessible to known techniques, or what other techniques might emerge, is speculation on top of speculation. "If we can land a man on the moon, why can't we ...."
-- Jerry
Posted by: Jerry Leichter | November 06, 2016 at 11:15 AM