The June 13, 2020 issue of The Economist included an in-depth look at the limits of AI, with seven articles on the subject. “There is no question that AI (or, to be precise, machine learning, one of its sub-fields) has made much progress,” notes The Economist in the issue’s overview article. “Computers have become dramatically better at many things they previously struggled with… Yet lately doubts have been creeping in about whether today’s AI technology is really as world-changing as it seems. It is running up against limits of one kind or another, and has failed to deliver on some of its proponents’ more grandiose promises.”
Transformative technologies (remember the dot-com bubble) are prone to hype cycles, in which the excitement and publicity accompanying early achievements lead to inflated expectations, followed by disillusionment if the technology fails to deliver. But AI is in a class by itself: the notion of machines achieving or surpassing human levels of intelligence has inspired both wonder and fear over the past several decades.
The article reminds us that AI has gone through two such major hype cycles since the field began in the mid-1950s. Early achievements, like beating humans at checkers and proving logic theorems, led researchers to conclude that machines would achieve human-level intelligence within a couple of decades. That early optimism collapsed, leading to the first so-called AI winter, from 1974 to 1980. The field was revived in the 1980s with the advent of commercial expert systems and Japan’s Fifth Generation project, but the revival didn’t last long, and a second AI winter followed from 1987 to 1993.
A different paradigm emerged in the mid-1990s, as AI embraced a statistical approach based on analyzing large amounts of data with powerful computers and sophisticated algorithms. Deep Blue, IBM’s chess-playing supercomputer, demonstrated the power of this approach in 1997 when it won a celebrated match against then-reigning world champion Garry Kasparov.
More recently, machine learning advances like deep learning have enabled computers to acquire knowledge on almost any subject by analyzing huge amounts of data. Machine learning methods are now applied to vision, speech recognition, language translation, and other tasks that not long ago seemed out of reach, and in a number of domains they’re approaching or surpassing human levels of performance.
“Billions of people use it every day, mostly without noticing, inside their smartphones and internet services,” says The Economist. “Yet despite this success, the fact remains that many of the grandest claims made about AI have once again failed to become reality, and confidence is wavering as researchers start to wonder whether the technology has hit a wall.” Promised breakthroughs like medical diagnoses and self-driving cars are taking longer than expected. In addition, recent surveys find that most companies have had trouble implementing, let alone getting value from, AI solutions.
Companies hoping to realize AI’s potential must confront a number of key challenges, argues The Economist. Let me summarize a few of them.
Deep learning is data hungry. Training machine learning algorithms requires large amounts of data: the bigger the training data sets, the more accurate the learning. One of deep learning’s distinguishing features is that, unlike classic analytic methods, its performance doesn’t level off at some data size; it keeps improving as training sets grow.
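To see the flavor of that claim, here’s a minimal sketch of how accuracy typically climbs as the training set grows. The dataset (scikit-learn’s bundled digits) and the model (logistic regression) are illustrative choices of mine, not anything from The Economist’s article:

```python
# A minimal sketch: validation accuracy as a function of training-set size.
# Dataset and model are illustrative choices, not from the source article.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)

sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=1000),   # simple baseline classifier
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,                                # 5-fold cross-validation
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training examples -> accuracy {score:.3f}")
```

With a small model like this one the curve eventually flattens; the point made above is that deep networks are notable for continuing to improve much further out.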
In theory, this is not a problem, as the world is now awash with data. In practice, data issues are among the thorniest challenges in any AI project. First is getting hold of the required data, which may not exist or may only be available from companies that consider data one of their key competitive assets. Even once data is available, getting it ready to train machine learning algorithms is time consuming; tasks like cleansing and labeling can take up to 80% of a typical AI project.
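Most of that 80% goes to mundane, unavoidable chores. A minimal sketch of typical cleansing steps, using a made-up table (all column names and values here are hypothetical):

```python
# Sketch of typical pre-training data cleansing. All data is made up.
import pandas as pd

raw = pd.DataFrame({
    "age":    [34, 34, None, 51, 290],       # duplicate, missing, outlier
    "income": [72000, 72000, 48000, None, 61000],
    "label":  ["yes", "yes", "no", "yes", "no"],
})

clean = raw.drop_duplicates()                               # remove exact duplicates
clean = clean[clean["age"].isna() | (clean["age"] < 120)]   # drop implausible ages
clean = clean.fillna({"age": clean["age"].median(),         # impute missing values
                      "income": clean["income"].median()})
clean["label"] = (clean["label"] == "yes").astype(int)      # encode labels as 0/1
print(clean)
```

Each step here is trivial on five rows; on millions of rows, with dozens of columns and ambiguous labels, this is where AI projects spend their time.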
Bias is another source of problems. Given that AI algorithms are trained on data collected over time, if the data reflect past racial, gender, or other biases, the predictions of these algorithms will reflect those biases too. A related issue was uncovered last year by the US National Institute of Standards and Technology (NIST), which tested nearly 200 facial recognition algorithms and found that they were significantly less accurate at recognizing African American, Asian, and Native American faces than white faces.
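How is such a disparity surfaced? The basic move is to compute the model’s error rate separately for each demographic group rather than in aggregate. A toy sketch with fabricated numbers (NIST’s actual methodology, built around false-match and false-non-match rates, is far more involved):

```python
# Sketch: per-group accuracy exposes bias hidden by an overall number.
# All values are fabricated for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "correct": [1,   0,   0,   1,   1,   1,   1,   1],   # 1 = correct match
})

print("overall accuracy:", results["correct"].mean())     # 0.75 -- looks acceptable
print(results.groupby("group")["correct"].mean())         # group A: 0.5, group B: 1.0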
Deep learning requires considerable computing power. Recent growth in AI computational power has been truly impressive. Prior to 2012, the compute used in the largest AI training runs doubled roughly every two years, closely tracking Moore’s Law. Since 2012 it has been doubling every 3.4 months, driven in part by a variety of specialized AI accelerators, for an overall increase of some 300,000-fold by 2018.
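A quick back-of-envelope check of those two figures, which come from OpenAI’s “AI and Compute” analysis (the arithmetic below is mine):

```python
# Back-of-envelope: how many 3.4-month doublings yield a 300,000x increase?
import math

doubling_months = 3.4
growth = 300_000

doublings = math.log2(growth)            # ~18.2 doublings
months = doublings * doubling_months     # ~62 months, i.e. ~5 years (2012-2018)
moore = 2 ** (months / 24)               # Moore's-Law pace over the same span

print(f"{doublings:.1f} doublings over ~{months / 12:.1f} years")
print(f"Moore's Law over the same period: only ~{moore:.0f}x")
```

The numbers are consistent: roughly five years of 3.4-month doublings is what separates 2012 from 2018, versus a mere ~6x increase at the old Moore’s-Law pace.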
Costs have similarly fallen for relatively simple AI applications, but the computing power required by the newest AI systems can be expensive, because the volume of data available for training continues to grow exponentially. At the cutting edge, a deep learning algorithm trained on 10 billion documents or images will perform significantly better than one trained on only one billion, but the costs in computing power, including electricity consumption, will also be significantly higher. These rising costs are a growing problem for small and mid-size companies, especially AI startups trying to get off the ground.
Deep learning is quite shallow. “[D]eep-learning approaches are fundamentally statistical, linking inputs to outputs in ways specified by their training data,” notes The Economist. “That leaves them unable to cope with what engineers call edge cases: unusual circumstances that are not common in those training data.” AI methods “are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub common sense. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.”
AI systems are brittle. They do best when dealing with data that closely resembles the data used in their training, but much less well when attempting to generalize or extrapolate beyond it. They’re easy to fool with slight perturbations to their inputs that wouldn’t fool humans, because we’re much more resilient to minor changes. This major difference between biological and artificial neural networks poses a profound challenge to the applicability of AI in critical areas like clinical medicine and autonomous vehicles.
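One classic way to see this brittleness concretely is the fast gradient sign method (FGSM) from the adversarial-examples literature: nudge each input feature slightly in whichever direction most increases the model’s loss. A minimal numpy sketch on a toy logistic-regression classifier (weights, bias, input, and perturbation budget are all made up for illustration):

```python
# Sketch: an FGSM-style perturbation flips a toy classifier's prediction.
# Weights, bias, input, and epsilon are all hypothetical.
import numpy as np

w = np.array([3.0, -4.0, 2.5])           # "trained" weights
b = 0.1
x = np.array([0.2, 0.05, 0.1])           # input correctly classified as class 1
y = 1.0                                  # true label

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid probability of class 1

# Gradient of the logistic loss with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w

eps = 0.15                               # small perturbation budget
x_adv = x + eps * np.sign(grad_x)        # FGSM step

print(f"clean input:     p(class 1) = {predict(x):.2f}")      # ~0.68 -> class 1
print(f"perturbed input: p(class 1) = {predict(x_adv):.2f}")  # ~0.34 -> class 0
```

Against a human, the same small nudge would be invisible; against the model, it flips the answer, which is exactly the failure mode that worries practitioners in medicine and autonomous driving.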
From AI summer to AI autumn? “The current bout of enthusiasm has been the biggest yet,” says The Economist in its concluding article. While some worry that another big bust might be coming as AI’s limits become apparent, the article adds that another full-blown winter is unlikely: the current summer is brighter and warmer than previous ones because AI has been so successful and so widely deployed. Instead, we could be facing a kind of AI autumn, akin to the downturn the IT industry went through in the early 2000s after the dot-com bubble burst, before resuming its advance a few years later.
Like other transformative technologies, AI is both powerful and limited. “As people become familiar with AI’s peculiar mix of power and fragility they may be reluctant to trust it with important decisions;… instead of asking what AI can do, humans need to think about what it should do. The technological limits of naive, fallible AI, in other words, will lead humans to impose additional political and social limits upon it. Clever algorithms will have to fit into a world that is full of humans, and, in theory at least, run by them.”
Very well put, thank you!
Posted by: Mark Bolzern | October 17, 2020 at 01:07 PM
Excellent thinking and valuable conclusion
Posted by: Hansueli Maerki | October 24, 2020 at 11:21 AM