Tools have played a central role in human evolution since our ancestors first developed hand axes and other stone tools a few million years ago. Ever since, we've been co-evolving right alongside the tools we create. "We shape our tools and they in turn shape us," observed noted author and educator Marshall McLuhan in the 1960s.
The Industrial Revolution led to dramatic improvements in productivity and standard of living over the past two hundred years. This is due largely to the machines we invented to make up for our physical limitations - the steam engines that enhanced our physical power, the railroads and cars that made up for our slow speed, and the airplanes that gave us the ability to fly.
Similarly, for the past several decades computers have been augmenting our intelligence and problem-solving capabilities. And, according to IBM's John Kelly and Steve Hamm, there is much more to come. In Smart Machines: IBM's Watson and the Era of Cognitive Computing, Kelly, director of IBM Research, and Hamm, a writer and strategist, note that "We are at the dawn of a major shift in the evolution of technology. The changes that are coming over the next two decades will transform the way people live and work, just as the computing revolution has transformed the human landscape over the past half century. We call this the era of cognitive computing."
The programmable computing era emerged in the 1940s. Most computers in use today are based on the architectural principles laid out in 1945 by mathematician John von Neumann. Any problem that can be expressed as a set of instructions can be codified in software and executed in such stored-program machines. This architecture has worked very well for many different kinds of scientific, business, government and consumer applications. But, its very strength, the ability to break down a problem into a set of instructions to be embedded in software, is proving to be its key limitation in the emerging world of big data.
Digital technologies are now found all around us, from the billions of mobile devices carried by almost every person on the planet to the explosive growth of what McKinsey is now calling the Internet of All Things. These digital devices are generating gigantic amounts of information every second of every hour of every day, and we are now asking our computers to help us make sense of all this data. What is it telling us about the environment we live in? How can we use it to make better decisions? Can it help us understand our incredibly complex economies and societies?
This kind of data-driven computing is radically different from the instruction-driven computing we've been living with for decades. And, in fact, such data-driven, sense-making, insight-extracting, problem-solving cognitive computers seem to have more in common with the structure of the human brain than with the architecture of a classic von Neumann machine. But, while inspired by the way our brains process and make sense of information, the objective of cognitive machines is not to think like a human, something we barely understand.
Rather, we want our cognitive machines to deal with large amounts and varieties of unstructured information in real time. Our brains have evolved to do so quite well over millions of years. But our brains can't keep up with the huge volumes of information now coming at us from all sides. So, just as we invented industrial machines to help us overcome our physical limitations, we now need to develop a new generation of machines to help us get around our cognitive limitations.
The quest for machines that exhibit the kind of problem-solving intelligence we associate with humans is not new. Artificial intelligence (AI) was one of the hottest areas in computer science in the 1960s and 1970s. Many of the AI leaders in those days were convinced that you could build a machine as intelligent as a human being within a couple of decades. They were trying to do so by somehow programming the machines to exhibit intelligent behavior, even though to this day we have no idea what intelligence is, let alone how to translate intelligence into a set of instructions to be executed by a machine. In the 1980s, the Japanese government even mounted a major national program, the Fifth Generation Computer Project, to develop highly sophisticated AI-like machines and programming languages. After years of unfulfilled promises, a so-called AI winter of reduced interest and funding set in.
But, while these ambitious AI approaches met with disappointment, a more applied, focused use of AI techniques was making progress, such as the use of natural language processing with limited vocabularies in voice response systems and the development of industrial robots for manufacturing applications. The biggest breakthrough in these engineering-oriented, AI-ish applications occurred when we switched paradigms. Instead of trying to program computers to act intelligently - an approach that had not worked so well in the past - we embraced a statistical, brute force approach based on analyzing vast amounts of information using powerful computers and sophisticated algorithms.
We discovered that such a statistical, information-based approach produced something akin to intelligence or knowledge. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely. The more information you had, the more powerful the supercomputers, the more sophisticated the algorithms, the better the results. Deep Blue, IBM's chess playing supercomputer, demonstrated the power of such a statistical, brute force approach by beating then reigning world chess champion Garry Kasparov in a celebrated match in May 1997.
Since that time, analyzing or searching large amounts of information has become increasingly important and commonplace in a wide variety of disciplines. Today, most of us use search engines as the primary mechanism for finding information on the World Wide Web. It is amazing how useful these mostly keyword-based approaches have proven to be in everyday use. And, beyond these word-oriented search engines, statistical, information-based systems are being extended in a number of directions.
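To make that idea a bit more concrete, here is a toy Python sketch of the keyword matching that such search engines build on. It is only an illustration - the three "documents" and the scoring are invented for this example, and real engines layer far more sophisticated statistics on top - but the core step of matching query words against an index of documents is much the same.

```python
from collections import defaultdict

# A hypothetical three-document corpus, invented purely for illustration.
documents = {
    "doc1": "cognitive computing augments human problem solving",
    "doc2": "steam engines augmented human physical power",
    "doc3": "watson analyzed natural language to answer questions",
}

# Build an inverted index: each word maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Rank documents by how many of the query's words they contain."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, set()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(search("human cognitive computing"))  # doc1 matches all three query words
```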
In February 2011, Watson, IBM's question-answering system, won the Jeopardy! Challenge against the two best human Jeopardy! players. Watson demonstrated that computers could now extract meaning from the unstructured knowledge developed by humans in books, articles, newspapers, web sites, social media, and anything written in natural language. Watson dealt with the information much as a human would, analyzing multiple options at the same time, considering the probability that each option was the answer to the problem it was dealing with, and then selecting the option with the highest probability of being right.
This is pretty much how human experts make decisions in endeavors like medical diagnosis, financial advice, customer service or strategy formulation. The human experts will typically consider a few of the most likely options based on the knowledge they have and make a decision. They will generally be right most of the time, but may have trouble when faced with a new or infrequently occurring problem beyond the scope of the most likely options. Also, we all have biases based on our personal experiences that make it hard to consider options beyond the scope of our intuition.
A cognitive system, on the other hand, can analyze many thousands of options at the same time, including the large number of infrequently occurring ones as well as ones that the expert has never seen before. It evaluates the probability of each option being the answer to the problem, and then comes up with the most likely options, that is, those with the highest probabilities. Moreover, the cognitive system has access to huge amounts of information of all kinds, both structured and unstructured, including not only books and documents, but also speech, pictures, videos and so on. These cognitive systems are truly beginning to augment our human cognitive capabilities much as earlier machines have augmented our physical ones.
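As a rough illustration of this candidate-and-confidence pattern, the short Python sketch below takes a handful of hypothetical candidate answers with made-up evidence scores, converts the scores into probabilities, and surfaces the most likely options. It is only a sketch of the general idea described above, not Watson's actual scoring pipeline.

```python
import math

def softmax(scores):
    """Convert raw evidence scores into probabilities that sum to one."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers with invented evidence scores
# (higher means more supporting evidence was found).
candidates = {
    "option A": 2.1,
    "option B": 0.4,
    "option C": 3.3,
    "option D": -1.0,
}

probabilities = softmax(list(candidates.values()))
ranked = sorted(zip(candidates, probabilities), key=lambda pair: pair[1], reverse=True)

# Report the most likely options, i.e. those with the highest probabilities.
for name, prob in ranked[:3]:
    print(f"{name}: {prob:.2f}")
```

In a real cognitive system the candidates and their evidence scores come from analyzing enormous bodies of structured and unstructured information, but the final step - ranking the options by probability and presenting the top few - is essentially what is sketched here.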
“Cognitive systems will extract insights from data sources from which we acquire almost no insight today, such as population-wide health care records, or from new sources of information, such as sensors monitoring pollution in delicate marine environments,” write Kelly and Hamm. “Such systems will still sometimes be programmed by people using if A, then B logic, but programmers won’t have to anticipate every procedure and every rule that will be required. Instead, computers will be equipped with interpretive capabilities that will make it possible for them to learn from the data and evolve over time as they gain new knowledge or as the demands on them change.”
“The goal isn’t to replicate human brains, though. This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results - each bringing their own superior skills to the partnership. The machines will be more rational and analytic - and, of course, possess encyclopedic memories and tremendous computational abilities. People will provide judgment, intuition, empathy, a moral compass and human creativity.”
In the end, this is today’s version of the quest that drove our ancestors to start developing stone tools a few million years ago, and that inspired the inventors of the many machines developed over the past few hundred years. We just want our smart machines to make us smarter.
Wonderful article, thank you.
What I find particularly worrying is that in the near future this technology will allow many cognitive computers to be *right* almost all the time.
This will have profound implications precisely because the machines will be able to make a correct choice without falling into a "bias trap". Human biases are what create opportunities, because not all human beings read and process available information in the same manner and therefore come to different conclusions, which result in different immediate and strategic choices.
Some of these will be successful, to the benefit of their adopters - others will fail, and this failure will benefit another group of individuals.
In the natural domain we all know there is no single biological species which always wins. Sometimes the lion succumbs to a virus, sometimes a colony of microorganisms will die because the local environment has drastically changed.
But what happens when cognitive computing allows someone to have the benefit of the highest success rate?
This idea makes me cringe to be honest.
I can imagine that the way out will be to have either more and faster computing power - resulting in even shorter boom-bust cycles - or to win by brute force, i.e. by "physically removing" the adversary from the game.
Kind regards
Paul
Posted by: Paul | June 27, 2013 at 05:02 AM
Great article. You can see the strong move to cognitive computing at large government departments and agencies. They need to develop appropriate tools that will not only aggregate large amounts of data, but extract useful information and create interfaces that allow for optimum display and use of the data.
Similarly, corporations can adapt the technology to help predict consumer trends and analyze enormous amounts of market data.
One area that I feel is not yet being fully addressed in this fashion is education. Data-driven computing has the potential to unlock unprecedented growth in individual education and level the playing field across income classes. The power of cognitive computing in this case is not in the ability to help students solve problems, but rather in the ability of the technology to extract insight into the process of individualized learning. The technology has progressed to the necessary level - how it makes its way into the classroom remains a mystery...
Best Regards,
Matt
Posted by: Matt | July 02, 2013 at 05:39 PM