AI is rapidly becoming one of the most important technologies of our era. Every day we can read about the latest AI advances from startups and large companies, including applications that not long ago were viewed as the exclusive domain of humans. Over the past few years, the necessary ingredients have finally come together to propel AI beyond the research labs into the marketplace: powerful, inexpensive computer technologies; huge amounts of data; and advanced algorithms.
Machine learning and related techniques like deep learning have played a major role in AI’s recent achievements. Machine learning gives computers the ability to learn by ingesting and analyzing large amounts of data instead of being explicitly programmed. It has enabled the construction of AI systems that are trained on large numbers of sample inputs and then applied to difficult AI problems like language translation, natural language processing, and playing championship-level Go.
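The contrast between explicit programming and learning from examples can be sketched in a few lines of Python. This is a purely illustrative toy, not anything from the article: instead of hard-coding a decision rule, the program derives a threshold from labeled sample inputs.

```python
# Toy illustration of "learning from data instead of being explicitly
# programmed": the decision threshold is derived from labeled examples,
# not written by the programmer. (Hypothetical data, for illustration only.)

def learn_threshold(examples):
    """Place the threshold midway between the largest negative example
    and the smallest positive example."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    return (max(negatives) + min(positives)) / 2

# Sample inputs the system "ingests": (measurement, label) pairs.
training_data = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]

threshold = learn_threshold(training_data)  # learned, not hand-coded

def classify(x):
    """Apply the learned rule to new, unseen inputs."""
    return 1 if x > threshold else 0
```

Real machine learning systems fit far richer models (e.g., the millions of parameters in a deep network), but the principle is the same: the behavior of the program is determined by the training data rather than by explicitly written rules.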
Over the past several decades, AI has been besieged by rounds of hype that over-promised, under-delivered, and nearly killed the field. Once again, a recent Gartner article positioned machine and deep learning at the top of its hype cycle: the point at which excitement, publicity, and promising potential produce a peak of inflated expectations, risking a fall into the trough of disillusionment if the technology fails to deliver. Now that AI is finally reaching a tipping point of market acceptance, it is particularly important to be cautious and not repeat past mistakes.
In Artificial Intelligence - The Revolution Hasn’t Happened Yet, UC Berkeley professor Michael I. Jordan aims to inject such a note of caution. “AI has now become the mantra of our current era… The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us - enthralling us and frightening us in equal measure. And, unfortunately, it distracts us… Whether or not we come to understand intelligence any time soon, we do have a major challenge on our hands in bringing together computers and humans in ways that enhance human life.”
The machines of the industrial economy made up for our physical limitations: steam engines enhanced our physical power, railroads and cars helped us go faster, and airplanes gave us the ability to fly. Similarly, the machines of the 21st century digital economy are making up for our cognitive limitations: augmenting our intelligence, our problem-solving capabilities, and our ability to process vast amounts of information. Machine and deep learning are the latest in a line of tools, like the World Wide Web, search, and analytics, that are helping us cope with and take advantage of the huge amounts of information all around us.
Jordan argues that we’re now witnessing the creation of a new branch of engineering that will help us in the development of AI systems and applications, much as over a century ago, civil engineering helped us develop tall buildings and bridges, and mechanical engineering helped us develop cars and airplanes. Before the advent of these engineering disciplines, buildings and bridges were developed in fairly ad-hoc ways, and were much less safe and subject to collapsing in unforeseen ways. And early cars and airplanes were significantly more prone to breakdowns and crashes. But, over time, engineering advances have led to foundational scientific principles, development practices and building blocks that significantly increased their safety and productivity.
We’re still in the ad-hoc, early stages of AI. Practices, building blocks and tools have started to emerge, as have a number of sophisticated mathematical techniques such as those underlying deep learning. “What we’re missing is an engineering discipline with its principles of analysis and design…”, notes Jordan. “Moreover, since much of the focus of the new discipline will be on data from and about humans, its development will require perspectives from the social sciences and humanities.”
AI as a field of academic research was born in 1956. The field’s founders aspired to develop machines with human-level intelligence. They believed that such human-imitative AI, capable of high-level cognitive capabilities including reasoning and thinking, would be achieved within a generation. “Sixty years later, however, high-level reasoning and thought remain elusive,” says Jordan. “The developments which are now being called AI arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics - the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions.”
The past two decades have seen major progress in AI, but it has not come from human-imitative AI. Rather, the progress has come from the application of AI to engineering problems, often referred to as intelligence augmentation (IA), where computers and data are used to create IT-based tools that augment human capabilities. In addition, the emergence of the Internet of Things and applications like smart cities and smart manufacturing is giving rise to system-wide intelligent infrastructures.
For the foreseeable future we will have to rely on intelligence augmentation, intelligent infrastructures, and related engineering approaches to continue to advance AI. As Jordan writes, “although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited - we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.” Moreover, the most challenging problem areas, e.g., healthcare, transportation, finance, education, and government, require highly complex engineering and systems-oriented advances. “It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.”
“While industry will continue to drive many developments, academia will also continue to play an essential role, not only in providing some of the most innovative technical ideas, but also in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed - notably the social sciences, the cognitive sciences and the humanities.”
“On the other hand, while the humanities and the sciences are essential as we go forward, we should also not pretend that we are talking about something other than an engineering effort of unprecedented scale and scope - society is aiming to build new kinds of artifacts. These artifacts should be built to work as claimed. We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work - that they make errors that take their toll in terms of human lives and happiness.”
In the end, concludes Jordan, “we should embrace the fact that what we are witnessing is the creation of a new branch of engineering… we have a real opportunity to conceive of something historically new - a human-centric engineering discipline.”