People have long argued about the future impact of technology. But, as AI is now seemingly everywhere, the concerns surrounding its long-term impact may well be in a class by themselves. Like no other technology, AI forces us to explore the boundaries between machines and humans. What will life be like in such an AI future?
Not surprisingly, considerable speculation surrounds this question. At one end we find books and articles exploring AI’s impact on jobs and the economy. Will AI turn out like other major innovations - e.g., steam power, electricity, cars - highly disruptive in the near term, but ultimately beneficial to society? Or, as our smart machines are increasingly applied to cognitive activities, will we see more radical economic and societal transformations? We don’t really know.
These concerns are not new. In a 1930 essay, for example, English economist John Maynard Keynes warned about the coming technological unemployment, a new societal disease whereby automation would outrun our ability to create new jobs.
Then we have the more speculative predictions that, in the not too distant future, a sentient, superintelligent AI might far surpass human intelligence as well as experience human-like feelings. Such an AI, we are warned, might even pose an “existential risk” that “could spell the end of the human race.”
A number of experts view these superintelligence predictions as yet another round of the periodic AI hype that in the 1980s led to the so-called AI winter. Interest in AI declined until the field was reborn in the 1990s by embracing an engineering-based, data-intensive analytics paradigm. To help us understand what an AI future might be like, Stanford University recently launched AI100, “a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.”
Superintelligence is a truly fascinating subject, the stuff of science fiction novels and movies. But, whether you believe in it or not, how can we best frame a serious discussion of the subject? I believe that, in the end, it comes down to which of two major forces will prevail over time - exponential growth or the complexity brake.
In their 2011 bestseller, Race Against the Machine, MIT’s Erik Brynjolfsson and Andy McAfee argue that the breakthrough advances AI has achieved in just the past few years - e.g., Watson, Siri, Google’s driverless cars - are the result of Moore’s Law and exponential growth. They illustrate the power of exponential growth using an ancient Indian story about the creation of chess.
According to the story, upon being shown the game of chess, the emperor was so pleased that he told its inventor to name his own reward. The inventor proceeded to request what seemed like a modest reward. He asked for an amount of rice computed as follows: one grain of rice for the first square of the chessboard, two grains for the second one, four for the third and so on, doubling the amount each time up to the 64th square.
After 32 squares, the inventor had received 2^32, or about 4 billion grains of rice, roughly one large field’s worth weighing about 100,000 kilograms - a large, but not unreasonable reward. However, the second half of the chessboard is different due to the power of exponential growth. After 64 squares, the total amount of rice, 2^64 grains, would have made a heap bigger than Mount Everest and would have been roughly 1,000 times the world’s total production of rice in 2010.
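The arithmetic behind the story is easy to reproduce. Here is a minimal sketch in Python; the ~25 mg weight per grain of rice and the ~700-million-tonne figure for 2010 world rice production are illustrative assumptions of mine, not numbers from the book:

```python
# Chessboard-and-rice arithmetic: square n holds 2**(n-1) grains,
# so the first k squares total 2**k - 1 grains.
GRAIN_KG = 2.5e-5          # assumed weight of one grain of rice (~25 mg)
WORLD_RICE_2010_KG = 7e11  # assumed 2010 world rice production (~700 million tonnes)

def total_grains(squares: int) -> int:
    return 2 ** squares - 1

half = total_grains(32)  # first half of the board
full = total_grains(64)  # the entire board

print(f"32 squares: {half:.2e} grains, about {half * GRAIN_KG:,.0f} kg")
print(f"64 squares: {full:.2e} grains, about {full * GRAIN_KG:.2e} kg, "
      f"or roughly {full * GRAIN_KG / WORLD_RICE_2010_KG:,.0f} times the assumed 2010 harvest")
```

With these assumptions, the first half of the board comes to roughly 107,000 kg, and the full board to several hundred times a year’s world rice harvest - the same order of magnitude as the figures above.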
Digital technologies have recently entered the second half of the chessboard. If we assume 1958 as the starting year and the standard 18 months for the doubling of Moore’s Law, 32 doublings would then take us to 2006 - “into the phase where exponential growth yields jaw-dropping results.” What happens then?
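Before turning to that question, here is a quick sanity check of the timeline arithmetic - a sketch that simply uses the 1958 starting year and 18-month doubling period assumed above:

```python
START_YEAR = 1958      # assumed start of the digital computing era
DOUBLING_YEARS = 1.5   # the standard 18-month Moore's Law doubling period

# 32 doublings take us to the "second half of the chessboard"
print(START_YEAR + 32 * DOUBLING_YEARS)   # -> 2006.0
```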
In his 2005 book, The Singularity Is Near: When Humans Transcend Biology, author and inventor Ray Kurzweil predicted that exponential advances in technology lead to what he calls the Law of Accelerating Returns. As a result, around 2045 we will reach the Singularity, at which time “machine intelligence will be infinitely more powerful than all human intelligence combined.”
Many dismiss Kurzweil’s arguments out of hand, as well as those of other so-called Singularitarians. The most thoughtful counter-argument I’ve seen is a 2011 MIT Technology Review article, The Singularity Isn’t Near, by Paul Allen. Allen, a co-founder of Microsoft, counts brain science and AI among his diverse interests. He’s the founder of the Allen Institute for Brain Science and the Allen Institute for Artificial Intelligence.
Allen believes that while it’s possible that the singularity will one day occur, “we don’t think it is near. In fact, we think it will be a very long time coming.” Kurzweil’s Law of Accelerating Returns derives much of its exponential shape “from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs.”
Creating the kind of advanced software needed to achieve singularity-level intelligence requires scientific progress in the fundamentals of cognition - way beyond where we are today. We either have to reverse-engineer the human brain so it can serve as an architectural guide, or we need to develop a new kind of artificial superintelligence.
The human brain may well be the most complex object in the universe. Understanding how it works “means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil’s timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.”
The problem in achieving such an understanding is what Allen calls the complexity brake. “As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be….”
“Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end - the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.”
An alternative approach is to develop a new kind of exponential computational intelligence. This assumes that over time our present weak, narrow AI will achieve the kind of strong, general intelligence that exceeds human intelligence, followed soon after by AI-designed superintelligent machines.
“Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale?” asks Allen. “One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson’s performance on Jeopardy! indicates paths like this may yet have promise…”
“And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn’t happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.”
No one can tell what life will be like decades from now, but my views are much closer to Allen’s than to Kurzweil’s. As Allen writes in conclusion: “Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.”
Another thought-provoking observation --- thank you for expanding the scope of the application and definition of simple technology to areas touching the broader social sciences. But, I have to admit, after reading in your opening "in the not too distant future", I had a hard time reading the rest of the article without that theme song playing in my mind!
Posted by: Rick Fuchs | March 03, 2015 at 02:37 PM
Thank you for another interesting and thought-provoking post on the AI subject. It looks like AI and the Singularity might be a question of “when” rather than “if”. Brain reverse-engineering projects around the world, if properly funded, together with cognitive computing, will do nothing but accelerate our understanding of how the brain works and the application of that knowledge. Critics deny some fundamental merits of these efforts, citing the huge complexity of the human brain and the impossibility of reaching a real understanding, but I feel they might not take into account the power of cognitive computing and parallel progress in theoretical neuroscience. As an analogy, it comes to mind that studying the universe as a whole might also look like an impossible task but, as theoretical cosmology has taught us, it is a matter of decreasing the degrees of freedom in the theory to make it a very effective and predictive physical science.
Posted by: Pasquale Di Cesare | March 09, 2015 at 07:58 AM