Tools have played a critical role in human evolution for a very, very long time. As the Tools entry in Wikipedia observes:
“Tools are the most important items that the ancient humans used to climb to the top of the food chain; by inventing tools, they were able to accomplish tasks that human bodies could not, such as using a spear or bow and arrow to kill prey, since their teeth were not sharp enough to pierce many animals' skins . . . The transition from stone to metal tools roughly coincided with the development of agriculture around the 4th millennium BC. Mechanical devices experienced a major expansion in their use in the Middle Ages with the systematic employment of new energy sources: water (waterwheels) and wind (windmills). . . Machine tools occasioned a surge in producing new tools in the industrial revolution. Advocates of nanotechnology expect a similar surge as tools become microscopic in size.”
Beyond extending our physical capabilities, tools have played a major role in the evolution of our brain and mental powers: “Using tools has been interpreted as a sign of intelligence, and it has been theorized that tool use may have stimulated certain aspects of human evolution - most notably the continued expansion of the human brain.”
Money, for example, was invented as a tool to facilitate commerce about 2,500 years ago, and has since had a huge impact on the formation of businesses, economies and governments. Another example is the advent of the printing press in the 15th century, which led to the wide dissemination of the written word in the form of books and newspapers. In the last two centuries we have seen the emergence of a wide variety of communication and media technologies, which in turn led to the development of many new tools, like the telegraph and telephone, the phonograph and film, and radio and TV.
The digital technology revolution of the past fifty years has taken our information-based, intelligence-enhancing tools to a whole new level. As digital components continue to become ever more powerful, inexpensive and ubiquitous, our IT systems are growing in breadth and capabilities at a dramatic rate. Moreover, using these increasingly powerful IT tools, we are now able to address significantly more sophisticated and complex problems than ever - in science, engineering, business and society in general.
We keep turning to biology as a metaphor and inspiration to help us understand the increasing power of the tools around us. Paralleling the evolutionary Cambrian Explosion that took place around half a billion years ago, our digital systems and tools now seem to be undergoing a Cambrian Explosion of their own. Nowhere does this comparison of machines to humans generate more interest than in matters relating to artificial intelligence, perhaps because we are seeing our tools begin to approach, and perhaps someday surpass, our most distinctive human capability.
The recent broadcast of the IBM Jeopardy! Challenge generated a great deal of discussion on the subject. The Challenge pitted Watson, IBM’s Question Answering computer, against the two best human Jeopardy! players, Ken Jennings and Brad Rutter. Watson won the Jeopardy! Challenge, renewing speculation that we may be getting closer to the world of strong AI, when our machines will match or exceed human intelligence.
In a very good article in the February 14 Science section of the New York Times, John Markoff put Watson in perspective against the two main currents of AI research over the past forty years.
“At the dawn of the modern computer era, two Pentagon-financed laboratories bracketed Stanford University. At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it. . . For the past four decades that basic tension between artificial intelligence and intelligence augmentation - A.I. versus I.A. - has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.
“Now, as the pace of technological change continues to accelerate, it has become increasingly possible to design computing systems that enhance the human experience, or now - in a growing number of cases - completely dispense with it. The implications of progress in A.I. are being brought into sharp relief now by the broadcasting of a recorded competition pitting the IBM computing system named Watson against the two best human Jeopardy! players, Ken Jennings and Brad Rutter.”
Many of the early AI leaders in the 1960s and 1970s were convinced that you could build a machine as intelligent as a human being within a generation, and obtained considerable government funding to implement their vision. Eventually it became clear that these projects had grossly underestimated the difficulty of developing machines exhibiting general human intelligence.
But a number of very smart scientists and technologists continue to believe that we are fast approaching a post-human technological Singularity, when computers will far surpass humans in intelligence. Ray Kurzweil is among the most prominent advocates of such a future Singularity. According to this recent Time article, “2045: The Year Man Becomes Immortal”:
“Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity - our bodies, our minds, our civilization - will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.”
While these ambitious, general AI approaches have so far met with disappointment, the more applied use of AI techniques focused on intelligence augmentation has been quite successful. The biggest breakthrough in these engineering-oriented, AI-ish applications occurred when we switched paradigms. Instead of trying to program computers to act intelligently - an approach that had not worked so well in the past - we embraced a statistical, brute-force approach based on analyzing vast amounts of information using powerful computers and sophisticated algorithms.
We discovered that such a statistical, information-based approach produced something akin to intelligence. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely: the more information you had, the more powerful the computers and the more sophisticated the algorithms, the better the results. Deep Blue - IBM's chess-playing supercomputer that defeated then-reigning world chess champion Garry Kasparov in a celebrated match in May of 1997 - and Watson are very much in this applied, engineering-oriented camp.
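To make the paradigm shift concrete, here is a toy sketch - emphatically not Watson's or Deep Blue's actual methods, whose architectures are far more sophisticated - of what a statistical, information-based approach looks like: a tiny naive Bayes classifier, written in plain Python, that assigns a topic to a Jeopardy!-style clue purely from word counts in labeled examples, with no hand-coded rules about language or knowledge.

```python
import math
from collections import Counter, defaultdict

# A toy illustration of the statistical paradigm: instead of hand-coding
# rules about language, we count word frequencies in labeled examples and
# let Bayes' rule pick the most probable topic for a new clue.
# (This is NOT Watson's actual architecture - just a minimal sketch.)

class NaiveBayesTopicClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # topic -> word -> count
        self.topic_counts = Counter()            # topic -> number of examples
        self.vocabulary = set()

    def train(self, clue, topic):
        # Learning is nothing more than updating counts.
        self.topic_counts[topic] += 1
        for word in clue.lower().split():
            self.word_counts[topic][word] += 1
            self.vocabulary.add(word)

    def classify(self, clue):
        total = sum(self.topic_counts.values())
        best_topic, best_score = None, float("-inf")
        for topic in self.topic_counts:
            # Log prior plus sum of log likelihoods, with add-one smoothing
            # so unseen words do not zero out a topic's probability.
            score = math.log(self.topic_counts[topic] / total)
            topic_total = sum(self.word_counts[topic].values())
            for word in clue.lower().split():
                count = self.word_counts[topic][word] + 1
                score += math.log(count / (topic_total + len(self.vocabulary)))
            if score > best_score:
                best_topic, best_score = topic, score
        return best_topic

clf = NaiveBayesTopicClassifier()
clf.train("this president signed the emancipation proclamation", "history")
clf.train("this element has atomic number 79", "science")
clf.train("this war ended with the treaty of versailles", "history")
clf.train("this planet is known as the red planet", "science")
print(clf.classify("this president delivered the gettysburg address"))  # history
```

The scaling property described above shows up even in this toy: every additional labeled clue refines the word statistics and tends to improve the predictions, with no new rules to write.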
According to this view of AI, our computers are productivity and intelligence augmentation tools, not unlike the steam engines, cars and airplanes that compensate for our physical limitations - enhancing our limited power, speeding up our slow pace, and giving us the ability to fly. Watson is but the latest example of tools like the World Wide Web, search technologies and advanced analytics that are helping us cope with and take advantage of the explosion of information all around us.
In such a view, we are continuing to co-evolve along with our tools, as we have done from time immemorial. “We shape our tools and they in turn shape us,” observed noted author and educator Marshall McLuhan in the 1960s. Moreover, computer systems like Watson are increasingly giving us the ability to leverage our collective intelligence, by gathering the best available information from all possible sources in a domain or discipline, and extracting knowledge and insights through powerful and sophisticated analysis.
You don’t have to accept the Singularity’s premise that computers will far surpass us in intelligence by the middle of this century to believe that our increasingly intelligent digital devices are having a huge impact on all aspects of business, society and our personal lives. Thirty years ago few had personal computers, fifteen years ago few used the World Wide Web, and five years ago there were relatively few smartphone users. These tools are now ubiquitous, and becoming more powerful and intelligent seemingly by the day. We clearly created them, but how will they in turn shape our economies, our institutions and everything around us over the next several decades? How are they shaping the way we think, work and deal with each other? What impact might they have on the evolution of our brains over time?
These are important and profound questions. We don’t know the answers, any more than we could have predicted the impact of the PC and the Web when they first appeared in the relatively recent past, let alone the impact of the steam engine, the automobile and the airplane in previous generations.
It is likely that every generation, when contemplating its own disruptive technologies and innovations, feels that the world as it has known it is about to undergo drastic changes and that nothing will ever be the same. Every generation is right, of course. But so far we have found that once that future arrives, it feels much less foreign than we originally anticipated, because we and our societies co-evolved right along with our newly created tools, learning to use them to solve ever more complex problems and to invent even more powerful tools.
I suspect that this co-evolution of humans and our tools will continue well into the future.