Several weeks ago, I wrote about the event I attended on February 28 to celebrate the launch of MIT’s new Schwarzman College of Computing, MIT’s strategic response to the rise of artificial intelligence, a technology that will reshape “geopolitics, our economy, our daily lives and the very definition of work” in the decades to come. The all-day celebration featured talks and panels on a wide variety of topics, some focused on innovative applications of AI technologies, others on the challenging issues raised by these powerful technologies.
A few weeks later I wrote about one of these challenging issues, the impact of AI on our social interactions, based on the talk by MIT professor Sherry Turkle on Rethinking Friction in Digital Culture, and a related article by Yale professor Nicholas Christakis on How AI Will Rewire Us.
I now want to discuss the interview conducted by NY Times columnist Thomas Friedman with former US Secretary of State Dr. Henry Kissinger at the February 28 event. The interview, which can be seen in this video, was based on a June 2018 article by Dr. Kissinger in The Atlantic, How the Enlightenment Ends: “Philosophically, intellectually - in every way - human society is unprepared for the rise of artificial intelligence.”
The central thesis of Kissinger’s article is that “Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion…”
“The Age of Reason originated the thoughts and actions that shaped the contemporary world order. But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.”
Does the AI revolution presage a New Enlightenment or a New Dark Age, Friedman asked. “We don’t know,” replied Kissinger. We don’t yet understand how to relate the many choices offered to us by AI to human criteria like ethics, or even how to define what those criteria are.
“The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute,…” wrote Kissinger in the article. “Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning… as a rule, they demand information relevant to their immediate practical needs… Truth becomes relative. Information threatens to overwhelm wisdom… Inundated via social media with the opinions of multitudes, users are diverted from introspection…”
“The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision. The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection.”
AI takes these concerns to a whole different level. Up to now, we’ve applied technologies to automate processes within human-prescribed systems and objectives. AI, in contrast, is able to prescribe its own objectives. “AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. Through this process, artificial intelligence develops an ability previously thought to be reserved for human beings. It makes strategic judgments about the future.”
Kissinger feels that “the impact of AI will be of historic consequence.” Its applications are increasingly capable of coming up with results that are totally unexpected and radically different from the way humans solve problems.
“Artificial intelligence will in time bring extraordinary benefits to medical science, clean-energy provision, environmental issues, and many other areas,” wrote Kissinger in the Atlantic article. “But precisely because AI makes judgments regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in its results.” His article lists three key areas of concern:
AI applications may achieve unintended results. How can we ensure that our increasingly complex AI systems do what we want them to do? Science fiction is full of scenarios of AI turning on its creators, e.g., HAL in 2001: A Space Odyssey. But, beyond science fiction, there are other major ways in which things might not work as expected.
We’re all familiar with software bugs, especially in highly complex software, and AI systems are among the most complex software artifacts we build. The growing complexity of AI systems and their enlistment in high-stakes roles, like controlling airplanes, cars, surgical robots and health care systems, means that we must redouble our efforts in testing and evaluating their quality.
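To make “testing AI systems” a bit more concrete, here is a minimal sketch, in Python, of one technique from the software-testing toolbox, a metamorphic test: even when we can’t say what the exactly right output is for a given input, we can check that the system never violates properties it must always respect. The `predict_braking_distance` function is a hypothetical stand-in for a trained model, not a reference to any real system.

```python
# A minimal sketch of behavioral testing for an AI component.
# predict_braking_distance is a hypothetical stand-in for any
# learned model embedded in a safety-critical system.

def predict_braking_distance(speed_kmh: float) -> float:
    # Placeholder model: in practice this would be a trained estimator.
    return 0.005 * speed_kmh ** 2 + 0.2 * speed_kmh

def test_monotonic_in_speed():
    # Metamorphic test: we may not know the "right" answer for any
    # single input, but a faster speed must never yield a shorter
    # predicted stopping distance.
    speeds = [30, 50, 80, 100, 130]
    distances = [predict_braking_distance(s) for s in speeds]
    assert distances == sorted(distances), "model violates physical monotonicity"

if __name__ == "__main__":
    test_monotonic_in_speed()
    print("monotonicity check passed")
```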
Beyond software bugs, AI systems may have problems of their own, especially if developed using machine learning algorithms and trained with large data sets. There may be additional flaws in the algorithms themselves. Or the training data may include unforeseen biases. The systems may well be working as designed, but not as we actually want them to work. It may well take us a while to figure out whether the problem lies with the underlying software, the machine learning algorithms, the training data, or some combination of the above.
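A toy example may help show how a system can work exactly as designed and still not as we want. In the hypothetical sketch below, a trivial “model” learns hiring rates from past decisions; with no bug anywhere in the code, it faithfully reproduces whatever bias those decisions contained (the ZIP codes and outcomes are invented for illustration).

```python
# A toy illustration (invented data) of a system working "as designed"
# yet not as intended: the learner faithfully reproduces a bias that
# was present in its historical training set.

# Each record: (zip_code, hired) -- past human decisions, not ground truth.
training_data = [
    ("02139", True), ("02139", True), ("02139", True),
    ("10001", False), ("10001", False), ("10001", True),
]

# "Model": the hire rate per ZIP code, learned directly from the data.
rates = {}
for zip_code in {z for z, _ in training_data}:
    outcomes = [hired for z, hired in training_data if z == zip_code]
    rates[zip_code] = sum(outcomes) / len(outcomes)

print(rates)  # one neighborhood at 100%, the other at ~33% -- bias in, bias out
```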
The AI system may be unable to explain the rationale for its conclusions. Even if the system is working correctly and achieves its intended goals, it may be unable to explain how it did so in terms that humans can understand. Explaining to a human the reasoning behind a particular decision or recommendation made by a machine learning algorithm is quite difficult, because its methods, subtle adjustments to the numerical weights that interconnect its huge number of artificial neurons, are so different from those used by humans.
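The contrast shows up in even the smallest example. In the sketch below (with made-up coefficients), a human-readable rule and a set of learned weights encode essentially the same decision boundary; the two can agree on every input, yet only the first carries an explanation a person can follow.

```python
# Contrast a human-readable rationale with a learned model's internals.

# Human-style reasoning: explicit, inspectable, one sentence long.
def rule_based(income, debt):
    return income > 3 * debt  # "approve if income exceeds 3x debt"

# Model-style reasoning: roughly the same boundary, encoded as numbers.
weights = [0.73, -2.19]  # illustrative learned coefficients
bias = 0.05

def weight_based(income, debt):
    return weights[0] * income + weights[1] * debt + bias > 0

# Both approve this applicant, but only rule_based can say why.
print(rule_based(90, 20), weight_based(90, 20))
```

Scale those two coefficients up to the millions of weights in a modern deep network, and the explanatory gap becomes the problem Kissinger is pointing at.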
In achieving its intended goals, AI may change human thought processes and human values. In general, humans solve complex problems by developing an explicit or conceptual model of the problem. Such models provide the context for arriving at a solution or making a decision. AI, on the other hand, learns mathematically, by marginally adjusting its algorithms as it analyzes its training data. This inherent lack of context can lead AI to misinterpret human instructions, and it makes it difficult for AI to take into account the kind of subjective, qualitative caveats, like “ethical” or “reasonable,” that guide human decisions.
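For readers who want to see what “marginally adjusting” looks like in practice, here is a minimal sketch of gradient descent, the basic learning procedure behind most of today’s machine learning, fitting a single parameter. Nothing in the loop represents a concept or a context; there is only an error measure being nudged downward.

```python
# A minimal sketch of "learning by marginal adjustment": gradient
# descent nudges a single parameter w toward whatever minimizes the
# error, with no model of why the data look the way they do.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x
w = 0.0    # initial guess for the slope
lr = 0.05  # learning rate: the size of each marginal adjustment

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "marginal adjustment", nothing more

print(round(w, 3))  # ~2.036: a good fit, reached without any notion of context
```

The loop would fit nonsense data just as contentedly; the procedure carries no judgment about whether its answer is sensible.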
Moreover, given that AI learns exponentially faster than humans, its mistakes and deviations are likely to propagate and grow faster than those typically made by humans. An AI system that’s constantly learning by ingesting new data might inevitably develop slight deviations that could, over time, cascade into catastrophic failures. Humans use qualitative attributes like wisdom, judgment and common sense to temper and correct their mistakes, attributes that quantitatively based AI systems generally don’t have.
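A deliberately oversimplified simulation can illustrate how such deviations compound. Assume, purely for illustration, a system that each cycle retrains on data contaminated by its own previous output, drifting one percent further from reality per cycle.

```python
# A toy simulation (illustrative only) of compounding deviation: each
# cycle the system retrains on data nudged by its own previous output,
# so a tiny per-cycle error grows multiplicatively instead of being
# corrected by anything like judgment or common sense.

true_value = 100.0
estimate = 100.0

for cycle in range(50):
    estimate *= 1.01  # a 1% deviation baked into each retraining cycle

print(f"after 50 cycles: {estimate:.1f} vs. true value {true_value}")
# ~164.5 -- a 1% per-cycle drift has become a 64% error, with no
# human-style sanity check anywhere in the loop to catch it.
```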
“So to close, Henry, when you come back 10 years from now and give [MIT president] Rafael [Reif] a report card for the School of Computing, what will constitute success for this great new enterprise?” asked Friedman in conclusion. To which Kissinger replied:
- “I would like to see whether the people who are exploring the next state, the next future have got a better grip than now exists on the nature of the conceptions that artificial intelligence produces.”
- “Then I would like to see whether it had been possible to develop some concepts [for controlling AI-based cyberattacks] that are comparable to the arms control concepts in which I was involved, say, 50 years ago, which were not always successful. But the theory was quite explicable. We don't have that yet.”
- And in all or most of the AI fields being explored, “I would be very interested to see whether the enterprises or the institutions that are fostering them are not just solving the problem that got them interested, but… have made some progress in the implications that will determine our future and the future of the world.”