A recent Op-Ed in the New York Times caught my attention: The First Church of Robotics, by Jaron Lanier, a self-described “computer scientist, composer, visual artist, and author” who did pioneering research in virtual reality in the 1980s. I have met Lanier, and a few years ago I participated in a panel on virtual worlds that he moderated. In his excellent Op-Ed, he writes:
“The news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools.”
“. . . What bothers me most about this trend, however, is that by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people. . . When we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on - with the machines and with ourselves.”
Jaron Lanier’s Op-Ed inspired me to reflect on the successes and failures of artificial intelligence (AI) over the past several decades, as well as on some of the essential differences between machine and human intelligence.
The term artificial intelligence is often used in quite different ways. At one end is the more applied kind of AI, which is essentially the application of advanced engineering to machines and systems in particularly clever ways that are inspired by and remind us of human intelligence. At the other end is what is sometimes called strong AI, which aims to develop machines that match or exceed human intelligence and cognitive abilities like reasoning, planning, learning, vision and natural language understanding.
But while these more ambitious, strong AI approaches have met with disappointment, the applied, focused use of AI techniques has been quite successful. Natural language processing is widely used in commercial voice response systems with limited vocabularies. Industrial robots are used in many manufacturing applications. Computer vision is used in a variety of applications, from security systems to digital cameras.
I think that the biggest breakthrough in these engineering-oriented, AI-ish applications occurred when we switched paradigms. Instead of trying to program computers to act intelligently, an approach that had not worked so well in the past, we embraced a statistical, brute-force approach based on analyzing vast amounts of information using powerful computers and sophisticated algorithms.
We discovered that such a statistical, information-based approach produced something akin to intelligence or knowledge. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely: the more information you had, the more powerful the supercomputers, and the more sophisticated the algorithms, the better the results. Deep Blue, IBM's chess-playing supercomputer, demonstrated the power of such a statistical, brute-force approach by beating then reigning world chess champion Garry Kasparov in a celebrated match in May of 1997.
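To make the paradigm shift concrete, here is a minimal sketch in Python of the statistical approach in miniature. It uses a toy naive Bayes classifier, my own illustrative stand-in for the general idea, not any specific system mentioned above: rather than encoding hand-written rules, it simply counts words in labeled examples (the training data here is invented). More examples yield better probability estimates, which is exactly the scaling property described above.

```python
import math
from collections import Counter

def train(examples):
    """Count word frequencies per label: a miniature naive Bayes model."""
    counts = {"pos": Counter(), "neg": Counter()}
    totals = {"pos": 0, "neg": 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label whose (smoothed) word probabilities best fit the text."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data; a real system would learn from millions of examples.
examples = [
    ("what a great brilliant move", "pos"),
    ("a strong winning position", "pos"),
    ("a terrible blunder", "neg"),
    ("a weak losing position", "neg"),
]
counts, totals = train(examples)
print(classify("a brilliant winning move", counts, totals))  # -> pos
```

Nothing in the code "understands" chess or sentiment; the counts alone produce behavior that looks like judgment, and the estimates improve mechanically as data is added.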
Since that time, analyzing or searching large amounts of information has become increasingly important and commonplace in a wide variety of disciplines. Today, most of us use search engines as the primary mechanism for finding information on the World Wide Web. It is amazing how useful these mostly keyword-based approaches have proven to be in everyday use. And beyond these word-oriented search engines, statistical, information-based systems are being extended in a number of directions.
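The core mechanism behind keyword search is simple enough to sketch. Below is a toy inverted index in Python (the documents are invented for illustration): each word maps to the set of documents containing it, and a query is answered by intersecting those sets. Real engines layer ranking, spelling correction and link analysis on top of this basic idea.

```python
from collections import defaultdict

# Toy document collection (invented for illustration).
documents = {
    1: "deep blue beats kasparov at chess",
    2: "watson plays jeopardy against human champions",
    3: "statistical methods scale with more data",
}

# Build the inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return ids of documents containing every word in the query."""
    results = None
    for word in query.lower().split():
        postings = index.get(word, set())
        results = postings if results is None else results & postings
    return results or set()

print(search("kasparov chess"))  # -> {1}
```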
One such direction is the development of sophisticated question-answering systems, which can successfully analyze the nuances and context embedded in a complex, natural language question and come up with the right answer. Just as it did with the chess-playing Deep Blue in the 1990s, IBM has been developing such an advanced question-answering system, named Watson, and is testing its capabilities by having Watson compete against some of the best contestants of the television game show Jeopardy!.
We are increasingly using terms like smart and intelligent when referring to highly complex, IT-based engineering systems that make extensive use of information analysis as well as sophisticated modeling and optimization. Such smart, intelligent systems hold great promise for helping us deal with our increasingly complex world.
IBM’s Smarter Planet and Smarter Cities are prominent examples of such initiatives. Their underlying premise is that in an increasingly instrumented and interconnected world, we can now gather huge amounts of information. Through sophisticated analysis using powerful supercomputers, we can then turn these mountains of information into real insights that guide our actions, help us make better decisions, and better manage the many things, processes and systems around us.
But while information-based intelligence is making our machines increasingly powerful and useful, we are also discovering their limitations.
Insights, predictions and models based on the analysis of historical data work very well for normal events, that is, events that follow a normal probability distribution, where, given enough information, the future can be fairly accurately predicted from past behavior. But such analytical methods do not work so well in the world of highly complex systems, especially those whose components and interrelationships are themselves quite complex. In such systems, the dynamic nature of the components, as well as their intricate interrelationships, renders them increasingly unpredictable and accounts for their emergent behavior, as is the case in systems biology and evolutionary biology.
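A small simulation can make the point. The sketch below, a purely illustrative toy using only Python's standard library, fits a mean and standard deviation to the history of a process, then counts future "five sigma" surprises. For a genuinely normal process such surprises essentially never occur; for a heavy-tailed process, of the kind complex systems often produce, they keep happening.

```python
import random

random.seed(7)
N = 100_000

def tail_events(draw):
    """Fit mean/sigma on one sample, count 5-sigma surprises in the next."""
    history = [draw() for _ in range(N)]
    mean = sum(history) / N
    sigma = (sum((x - mean) ** 2 for x in history) / N) ** 0.5
    future = [draw() for _ in range(N)]
    return sum(1 for x in future if abs(x - mean) > 5 * sigma)

# A well-behaved, normally distributed process: usually zero surprises,
# just as the Gaussian model predicts.
print("normal world:     ", tail_events(lambda: random.gauss(0, 1)))

# A heavy-tailed (Pareto) process: typically hundreds of "impossible"
# events, which are exactly the extremes that matter most in practice.
print("heavy-tailed world:", tail_events(lambda: random.paretovariate(2.5)))
```

The history-fitted model is not wrong about the bulk of the data in either case; it fails precisely on the rare, outsized events that dominate outcomes in complex systems.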
As a result of the Internet, we are now dealing with fast-changing, globally integrated, complex organizational systems in a number of areas of human endeavor, from finance to supply chains, from the propagation of news items to the marketing of products. Before the Internet brought the world much closer together, changes traveled relatively slowly and often became attenuated and less intense before reaching other parts of the system. Not so today. In our increasingly integrated world, changes propagate around the world in milliseconds, and will often lead to non-linear, unpredictable behaviors.
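To illustrate how tight coupling changes the character of a system, here is a toy cascade simulation, with hypothetical parameters and no claim to model any real network: a single failure spreads across a random graph, and a modest increase in average connectivity flips the outcome from a local incident to a system-wide event.

```python
import random

random.seed(1)

def cascade_size(n, avg_degree):
    """Build a random graph, fail node 0, let the failure spread to all
    reachable nodes, and return the fraction of the system that fails."""
    neighbors = [set() for _ in range(n)]
    for _ in range(int(avg_degree * n / 2)):
        i, j = random.randrange(n), random.randrange(n)
        if i != j:
            neighbors[i].add(j)
            neighbors[j].add(i)
    failed, frontier = {0}, [0]
    while frontier:
        for nb in neighbors[frontier.pop()]:
            if nb not in failed:
                failed.add(nb)
                frontier.append(nb)
    return len(failed) / n

# A small change in coupling produces a disproportionate change in impact.
for k in (0.5, 1.0, 1.5, 2.0, 3.0):
    avg = sum(cascade_size(2000, k) for _ in range(5)) / 5
    print(f"avg degree {k:.1f} -> cascade hits {avg:.0%} of the system")
```

Below a critical level of connectivity the shock stays local; just above it, the same shock engulfs most of the system. The response is non-linear, which is why intuition and models calibrated on the loosely coupled world fail in the tightly coupled one.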
In a recent CEO Study conducted by IBM, the participating CEOs said that their top concern is the highly volatile, uncertain and complex environment in which they are doing business around the world, an environment that they expect will grow significantly more complex over time.
How, then, do you deal with such an unpredictable, complex business and societal environment? Our IT-based tools are a huge help if properly applied. But they are not sufficient. You also need human judgment, experience and intelligence.
The CEO Study identified creativity as the most important quality needed to succeed in today’s business environment. It is creativity that enables business leaders to solve complex problems, come up with innovative ideas and develop the appropriate strategies throughout their organizations.
A similar conclusion was reached by Roger Martin, dean of the Rotman School of Management at the University of Toronto, in a New York Times article published in January of 2010:
“ . . . students need to learn how to think critically and creatively every bit as much as they need to learn finance or accounting. More specifically, they need to learn how to approach problems from many perspectives and to combine various approaches to find innovative solutions . . . Even before the financial upheaval last year, business executives operating in a fast-changing, global market were beginning to realize the value of managers who could think more nimbly across multiple frameworks, cultures and disciplines.”
I find it fascinating that the more powerful our machines and the better our tools, the more we are learning about the key differences between machine and human intelligence. Our intelligent machines will continue to help us deal with the increasingly complex world around us. But they can do so only up to a point. As we have been learning, our machines don’t do so well when dealing with problems that exhibit emergent behavior and are intrinsically unpredictable. When dealing with such problems, you need experience, creativity and critical thinking. You need human intelligence.
Human intelligence itself has continued to evolve over the past few million years as a key dimension of the continuing evolution of our species. And through all this time, perhaps nothing has stimulated the evolution of our brains and intelligence more than the tools we have been creating to help us deal with the challenges of our environment.
In the end, it is quite likely that the reason our increasingly intelligent tools have not been able to match or surpass human intelligence, and perhaps never will, is that the two are so complementary to each other. As our intelligence evolves, we develop better and better tools to deal with the complex world around us; these tools, in turn, have an impact on the evolution of our own intelligence. Thus, rather than competing with or replacing human intelligence, our intelligent machines are propelling us into exciting, as well as emergent, unpredictable and very human dimensions.