On November 17, OpenAI's board of directors ousted chief executive and co-founder Sam Altman, saying in a statement that “The board no longer has confidence in his ability to continue leading OpenAI.” Then on November 22, after an intense pressure campaign from employees and investors, OpenAI announced that it had reached “an agreement in principle for Sam to return to OpenAI as CEO” with a reconstituted board.
What led to this chaotic governance drama over approximately 107 hours from November 17 to 22?
“Many of the people at the company seem simultaneously motivated by the scientist’s desire to discover, the capitalist’s desire to ship product and the do-gooder’s desire to do this all safely,” wrote NY Times columnist David Brooks in “The Fight for the Soul of A.I.,” where he discussed the conflicting issues that led to OpenAI’s chaotic 4.5 days. “Is this fruitful contradiction sustainable? Can one organization, or one person, maintain the brain of a scientist, the drive of a capitalist and the cautious heart of a regulatory agency?”
Brooks reminded us that artificial intelligence comes from a long research lineage. AI came to light in the mid-1950s as a promising new academic discipline that aimed to develop intelligent machines capable of handling human-like tasks such as understanding natural language and playing chess. AI became one of the most exciting areas in computer science in the 1960s, ’70s, and ’80s, but after years of unfulfilled promises and hype, a so-called AI winter of reduced interest and funding set in, nearly killing the field.
AI was successfully reborn in the 1990s with a totally different paradigm based on analyzing large amounts of data with sophisticated algorithms and powerful computers. Over the past decade or so, the necessary ingredients have come together to propel AI beyond universities and research labs into the broader marketplace: powerful, inexpensive computer technologies; advanced algorithms, models, and systems; and huge amounts of all kinds of internet-based digital data. Finally, after decades of promise and hype, artificial intelligence has now become the defining technology of our era.
In the last few years, academic researchers have been leaving universities for industry. “Even today, many of the giants of the field are primarily researchers, not entrepreneurs,” noted Brooks, citing 2018 Turing Award winners NYU professor Yann LeCun, who since 2013 has also been chief AI scientist at Meta, and University of Toronto professor Geoffrey Hinton, who was a research scientist at Google from 2013 to 2023.
But, while AI continues to be more academic than other parts of the technology world, “the field also has the intensity and the audacity of the hottest of all startup sectors,” said Brooks. “The researchers kept telling me that this phase of A.I.’s history is so exhilarating precisely because nobody can predict what will happen next. … The people in A.I. seem to be experiencing radically different brain states all at once. I’ve found it incredibly hard to write about A.I. because it is literally unknowable whether this technology is leading us to heaven or hell, and so my attitude about it shifts with my mood.”
Brooks wrote that after visiting OpenAI’s San Francisco headquarters earlier this year, he found the culture quite impressive. Many of the people he interviewed were not primarily driven by money but by the desire to develop leading-edge, safe, and ethical AI systems. “As impressive as they all were, I remember telling myself: This isn’t going to last. I thought there was too much money floating around. These people may be earnest researchers, but whether they know it or not, they are still in a race to put out products, generate revenue and be first.”
“The fight over OpenAI was at least partly about dueling visions of artificial intelligence,” wrote NY Times technology columnist Kevin Roose in “A.I. Belongs to the Capitalists Now,” his own explanation of the OpenAI governance drama.
“In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential. In another vision, A.I. is something closer to an alien life form — a leviathan being summoned from the mathematical depths of neural networks — that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.”
Team Capitalism clearly won out. “Powerful A.I. is no longer just a thought experiment — it exists inside real products, like ChatGPT, that are used by millions of people every day,” added Roose. “The world’s biggest tech companies are racing to build even more powerful systems. And billions of dollars are being spent to build and deploy A.I. inside businesses, with the hope of reducing labor costs and increasing productivity.”
As Brooks had predicted a few months earlier, “there was too much money floating around.”
Is the struggle over the visions of researchers and capitalists unique to AI? Have we seen similar struggles with previous transformative, world-changing innovations? We don’t have to look very far to find a similar struggle in the evolution of the internet and World Wide Web, the defining technologies of the past three decades.
Lest we forget, the internet started out in 1969 as ARPANET, a research project sponsored by the US Advanced Research Projects Agency (ARPA, now DARPA) to develop a universal, resilient, digital network that would enable communications among computers following a nuclear attack. By the mid-1980s, the internet had evolved into NSFNET, a network widely used in academic and research communities.
Then in 1989, a research group led by Tim Berners-Lee at CERN, the high-energy physics lab in Switzerland, developed the World Wide Web as a global information system accessible from every node on the internet. Another research innovation followed in 1992: the development of the graphical, easy-to-use Mosaic browser by Marc Andreessen at the University of Illinois’ National Center for Supercomputing Applications.
The stage was now set for the internet and Web to begin their transformation from research technologies primarily used by academic and research communities to the commercial platforms they’ve since become. The number of internet users around the world is now estimated at around 5.3 billion, nearly two thirds of the world’s population.
The internet of the early 1990s, which some refer to as Web1, aimed to develop a decentralized computer network where everyone did their best to follow the same protocols and no one needed to be in charge. This open, standards-based internet truly transformed the IT industry, starting with replacing all previous proprietary networks, like IBM’s Systems Network Architecture (SNA). It was hoped that such a universally available internet would help eradicate nationalism and reduce inequality by bringing the world together as a connected global village.
But, by the mid-1990s, the huge economic potential of the internet and web started to become apparent. Netscape was founded by investors in 1994, recruiting Andreessen to develop a commercial version of the Mosaic browser, as well as web servers and other internet software. Netscape’s highly successful IPO only 16 months later caught the attention of the capitalist world, including VCs, investors, entrepreneurs, startups, and established companies.
By the mid-2000s, the internet evolved into its Web2 phase, giving users the ability to create and publish their own content in personal websites, blogs, and social media platforms, which over the years became dominated and monetized by a relatively small number of large companies. The past few years have seen the emergence of the Web3 movement, whose proponents aim to restore the objectives of the original Web1 internet by replacing corporate mega-platforms with a large number of smaller, distributed platforms that will usher in a more open, entrepreneurial internet and a middleman-free digital economy. Time will tell how the internet will evolve.
Similarly, time will tell how the different visions for the future of AI will play out.
“A.I. is a field that has brilliant people painting wildly diverging but also persuasive portraits of where this is going,” wrote Brooks. “The venture capital investor Marc Andreessen emphasizes that it is going to change the world vastly for the better. The cognitive scientist Gary Marcus depicts an equally persuasive scenario about how all this could go wrong. Nobody really knows who is right, but the researchers just keep plowing ahead.”
“And perhaps what happened at OpenAI — a triumph of corporate interests over worries about the future — was inevitable, given A.I.’s increasing importance,” said Roose in conclusion. “A technology potentially capable of ushering in a Fourth Industrial Revolution was unlikely to be governed over the long term by those who wanted to slow it down — not when so much money was at stake.”