On October 21 I participated in a Colloquium on the Frontiers of IT at IBM’s Thomas J. Watson Research Center. The colloquium, which was part of IBM’s Centennial celebration, brought together experts across industry and academia to discuss and debate the direction of four key Grand Challenges at the very leading edge of IT: nano systems, exascale, big data, and cognitive computing.
John Kelly, senior VP and director of IBM Research, gave an overview of these four areas. Here is a similar version of his talk given a week earlier at the University of Melbourne. John explained that these four areas have the potential to transform the IT industry because of their exponential growth, a result of both continual improvements and disruptive innovations. “Exponential curves,” he said, “will either put you ahead of the competition or kill you. It’s one or the other.”
Over the next decade, nano-devices are expected to advance by three orders of magnitude, from a billion to a trillion transistors on a chip. We will be able to design sophisticated, powerful nano systems-on-a-chip that will be totally contained within such a trillion-transistor nano device. To do so, we will have to shift from silicon to new materials, such as carbon-based ones. This requires disruptive technologies and innovations at all levels, including new materials, fabrication processes and design tools. The work is underway.
Exascale computing is also expected to advance by three orders of magnitude over the next decade or so. Having broken the petascale barrier a few years ago, the supercomputing community has its sights set on exascale systems. There are many challenges involved in developing such systems, foremost among them being power consumption.
Today’s most powerful supercomputers consume roughly between 1 and 3 megawatts per petaflop. It is generally agreed that an exascale-class system must consume no more than 10 to 20 megawatts; otherwise you would need a whole power plant alongside each such system, and its operating costs would be prohibitive. Thus, the 1000-fold increase in performance from petascale to exascale must be achieved with no more than a 10-fold increase in overall power consumption. This means that just about all components of the system, including its processors, memory, communications and software, must be redesigned to achieve the required two-orders-of-magnitude improvement in energy efficiency.
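A rough back-of-the-envelope check of that arithmetic, using a midpoint of the 1-3 megawatts-per-petaflop range quoted above and the upper end of the 10-20 megawatt budget (the specific numbers below are simply those illustrative assumptions):

```python
# Back-of-the-envelope check of the petascale-to-exascale power arithmetic.
petaflops_per_exaflop = 1000         # 1 exaflop = 1000 petaflops
mw_per_petaflop_today = 2.0          # midpoint of the 1-3 MW range quoted above
exascale_power_budget_mw = 20.0      # upper end of the accepted 10-20 MW ceiling

naive_exascale_mw = mw_per_petaflop_today * petaflops_per_exaflop
efficiency_gain_needed = naive_exascale_mw / exascale_power_budget_mw

print(f"Scaling today's efficiency to exascale: {naive_exascale_mw:,.0f} MW")  # ~2,000 MW
print(f"Efficiency gain required: ~{efficiency_gain_needed:,.0f}x")            # ~100x
```

Scaling today’s efficiency naively would demand on the order of 2,000 megawatts, which is why roughly a 100-fold, or two-orders-of-magnitude, gain in energy efficiency is needed.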
The third major area, big data, could grow even faster than the previous two: between three and six orders of magnitude in the next decade. This will happen as we look to analyze and extract meaningful information from unstructured data, people-based social networks and Internet-of-Things applications. Major advances must be made not just in the volumes and variety of data collected and analyzed, including text, voice, images and video, but also in the speed of the analysis, as many emerging smart applications require very fast response times.
The fourth grand challenge, cognitive computing, is enabled by the exponential advances of the previous three, but is itself more qualitative and subtle in nature. John framed the discussion by observing how our information processing machines have evolved over the past hundred years or so. In the beginning, we had simple tabulating machines which could only count and do simple arithmetic. Then came the age of computing and programmable systems of the past sixty years or so. We are now on the verge of a new era, with cognitive computing systems that can deal with huge volumes of information, understand natural language and are capable of learning.
Watson, IBM’s question-answering system that earlier this year won the Jeopardy! Challenge against the two best human Jeopardy! players, is one of the most advanced such cognitive systems. John finished his talk by pointing out the many challenges ahead of us as we design ever more advanced, smart learning systems. For example, it took Watson 85,000 watts of power to beat two guys with 20 watts of power each in their brains. We still have much to learn from biology.
David Ferrucci, IBM Fellow and principal investigator of Watson and Deep QA, followed John Kelly and gave a talk on Artificial Intelligence - the Promise Revisited. Dave observed that the initial academic projects in artificial intelligence (AI) were characterized by wild expectations, as researchers greatly underestimated the difficulty of developing such systems. The situation is now very different, as there is a much better appreciation of both the potential and the challenges involved.
When dealing with a subject like AI, it is very important to have the right set of expectations about both what can and what cannot be achieved. I very much liked Dave’s succinct formulation of his expectations for AI and its relationship to human intelligence:
Human intelligence originates meaning. Artificial intelligence detects human meaning.
Meaning is subjective and created by humans. We are constantly interpreting and extracting meaning when we communicate with another person, read something or observe the world around us. When engaged in a conversation, if we are not sure what the other person means, perhaps because we do not understand specific words or the overall context of the conversation, we will ask questions to clarify what the intended meaning might be and hopefully arrive at a mutual understanding.
Doing well in Jeopardy! was chosen as a concrete proof-of-concept for Watson because the whole objective of the game is to decipher the meaning of words and phrases across a broad domain. If Watson could infer the meaning of all the sophisticated clues, word-plays and puns used in Jeopardy!, then it could move on to help people make better decisions when faced with highly complex problems whose very meaning is not clear and precise.
Dave talked about the need for developing cognitive systems like Watson. The complexity of many human endeavors, including medical diagnoses, financial advice, formulating business strategy or setting government policies, has outgrown our ability to make good decisions on our own.
First of all, there is often no one answer, as there might be in a highly structured problem or question. It is a matter of evaluating multiple options and deciding which one best fits the problem. An expert will typically consider a few of the most likely options and make a decision. The expert will generally be right most of the time, but may have trouble when faced with a new or infrequently occurring problem beyond the scope of the most likely options.
A system like Watson, on the other hand, can analyze many thousands of options at the same time, including the large number of infrequently occurring ones as well as ones that the expert has never seen before. It evaluates the probability that each option is the answer to the problem, and then offers the human a list of the most likely options, that is, those with the highest probabilities.
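A minimal sketch of that “score everything, surface the most likely” idea, purely for illustration; the Hypothesis class, the candidate names and the confidence estimator below are all made up here and are not a description of Watson’s actual DeepQA pipeline:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    answer: str
    confidence: float  # estimated probability that this answer fits the problem

def rank_hypotheses(candidates, estimate_confidence, top_n=5):
    """Score every candidate option, including rare ones a human expert
    might never consider, and return the top_n most likely hypotheses."""
    scored = [Hypothesis(c, estimate_confidence(c)) for c in candidates]
    return sorted(scored, key=lambda h: h.confidence, reverse=True)[:top_n]

# Toy usage: thousands of candidates and a made-up confidence estimator.
candidates = [f"diagnosis_{i}" for i in range(10_000)]
top = rank_hypotheses(candidates, estimate_confidence=lambda c: hash(c) % 1000 / 1000)
for h in top:
    print(h.answer, round(h.confidence, 3))
```

The point of the sketch is simply that the machine never has to narrow its attention to a handful of familiar options the way a busy expert must; it can afford to score them all and let the probabilities do the filtering.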
The human expert, say a physician evaluating a rare medical condition, would then see that there are a number of alternative diagnoses with relatively high probabilities that should be further pursued. The physician engages in a dialogue with Watson to understand why it gave these options such a high probability, and to further explore their applicability by giving Watson additional information about the patient that may previously have seemed irrelevant.
Why is it so important for Watson to deal with unstructured content, that is, natural language text, speech, images, and so on? Much of the information needed to make complex decisions and learn is created for humans by humans in the form of articles, books, talks, presentations, and so on. Little of it is highly structured.
Watson would not be of much value if it could only analyze structured information. For Watson to be useful to people in analyzing and extracting meaning out of all the vast information at its disposal, it must deal with that information in the very form in which humans created it and originated the meaning it is now analyzing, namely natural language.
In our increasingly connected and fast-changing world, we will encounter many more complex problems that are beyond the ability of the vast majority of even the best human experts acting on their own. Such problems generally involve very complex systems, where the dynamic nature of their components, as well as their intricate interrelationships, renders them increasingly unpredictable and accounts for their emergent behavior.
Watson’s massively parallel architecture enables it to analyze many competing hypotheses against its very large content. Because its knowledge base comes from many different sources, it is able to analyze how many independent human experts would attack the problem, consider a very broad range of content, and then combine and balance its recommendations. It is essentially tapping into the collective intelligence of the best experts in a field to answer a question or solve a problem that none of them alone is equipped to handle.
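To make the idea of weighing many competing hypotheses against many independent sources in parallel a bit more concrete, here is a deliberately toy sketch; the sources, weights, scoring function and thread-based fan-out are all my own illustrative assumptions, not Watson’s architecture:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent evidence sources and trust weights (made up for this sketch).
SOURCE_WEIGHTS = {"textbooks": 0.5, "journal articles": 0.3, "case reports": 0.2}

def score(answer, source):
    """Stand-in for real evidence scoring: a real system would retrieve passages
    from the source and measure how strongly they support the answer."""
    return len(set(answer.lower()) & set(source)) / 10.0

def evaluate(answer):
    # Weighted combination of the evidence gathered from each independent source.
    return answer, sum(w * score(answer, s) for s, w in SOURCE_WEIGHTS.items())

candidates = ["lupus", "lyme disease", "sarcoidosis"]  # toy list of competing hypotheses
with ThreadPoolExecutor() as pool:
    results = list(pool.map(evaluate, candidates))     # score the hypotheses in parallel

for answer, confidence in sorted(results, key=lambda r: r[1], reverse=True):
    print(f"{answer:15s} {confidence:.2f}")
```

Each source acts like an independent expert voting on each hypothesis, and the weighted combination is what lets the system balance those voices rather than rely on any single one.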
The development of practical, useful cognitive systems is one of the most exciting and important grand challenges in the decades ahead. It will require advances in several key areas, including nano devices, exascale, big data and a number of others, each of which is a grand challenge of its own. And it represents a natural next step in the history of human progress: the development of better and better tools to help us deal with the increasingly complex world around us.
Ah, to be a fly on the wall at a conference to hear the expression of such thinking. I am thrilled to know that the thinking is ongoing and especially gratified that our former employer is at the forefront. The contextualizing of Watson should be shared more widely. Thanks for your surrogate work in keeping us informed. Great article.
Posted by: Bud Byrd | November 05, 2011 at 12:11 PM
"it took Watson 85,000 watts of power to beat two guys with 20 watts of power each in their brains."
That doesn't even take into account the area-under-the-curve (energy) required to assemble the information that Watson used. Watson had a massive database of information to draw on, how much energy was expended to acquire, assemble and index that data?
The efficiency of biology is yet another level beyond even what the article suggests.
Posted by: Ed | November 07, 2011 at 09:57 AM