Several weeks ago Vanity Fair published an article by NY Times columnist Maureen Dowd, Elon Musk’s Billion-Dollar Crusade to Stop the AI Apocalypse. “Elon Musk is famous for his futuristic gambles, but Silicon Valley’s latest rush to embrace artificial intelligence scares him,” noted Dowd. “And he thinks you should be frightened too.”
Entrepreneur and inventor Elon Musk is one of a number of world-renowned technologists and scientists who have expressed serious concerns that AI might pose an existential threat to humanity, a group that includes Stephen Hawking, Ray Kurzweil and Bill Gates. But the vast majority of AI experts do not share their fears. A few months ago, Stanford University’s One Hundred Year Study of AI project published a report by a panel of experts assessing the current state of AI. Their overriding finding was that:
“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”
Just because experts conclude that, at least for the foreseeable future, AI does not pose an imminent threat to humanity doesn’t mean that such a powerful technology isn’t accompanied by serious challenges that require our attention.