A recent Op-Ed in the New York Times caught my attention: “The First Church of Robotics” by Jaron Lanier, a self-described “computer scientist, composer, visual artist, and author” who did pioneering research in Virtual Reality in the 1980s. I have met Lanier, and a few years ago I participated in a panel on Virtual Worlds that he moderated. In his excellent Op-Ed, he writes:
“The news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools.”
“. . . What bothers me most about this trend, however, is that by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people. . . When we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on - with the machines and with ourselves.”
Jaron Lanier’s Op-Ed inspired me to reflect on the successes and failures of artificial intelligence (AI) over the past several decades, as well as on some of the essential differences between machine and human intelligence.
The term artificial intelligence is often used in quite different ways. At one end is the more applied kind of AI: essentially the application of advanced engineering to machines and systems in particularly clever ways, inspired by and reminiscent of human intelligence. At the other end is what is sometimes called strong AI, which aims to develop machines that match or exceed human intelligence and cognitive abilities such as reasoning, planning, learning, vision, and natural language understanding.