On July 13 I participated in an online panel on Board Governance: Risks and Opportunities in a Complex World, sponsored by eCornell, Cornell University’s executive education unit. The panel was organized and moderated by LizAnn Eisen, acting professor of the practice at the Cornell Law School, and former corporate partner at Cravath.
In addition to me, panel members included Bill Priestap, former assistant director of the FBI, head of the FBI Counterintelligence Division and now founder of the risk consulting startup Trenchcoat Advisors; Noah Phillips, former commissioner at the FTC and now antitrust partner at Cravath; and David Kappos, former director of the USPTO and now IP partner at Cravath.
The panelists discussed a range of topics that board members should pay attention to when considering risks and opportunities. Each of us focused on our respective areas of expertise. I discussed the potential business impact of artificial intelligence; Phillips focused on regulatory issues that are arising as AI is used in an increasing number of business applications, like the need for consumer protection; Kappos discussed the extensive IP and legal implications posed by AI; and Priestap focused on the challenges posed by an increasingly dangerous global geopolitical environment.
A recording of our panel can be found here.
About a week before the panel, Eisen sent me the two major questions that she wanted me to address. Let me share the remarks that I prepared in answer to her questions.
What should board members be asking their senior leaders about AI deployment?
AI has been generating huge attention. For the past few months, I’ve been trying to sort out what is hype and what is real about Large Language Models, ChatGPT and related recent AI breakthroughs. There’s no question that AI has emerged as the most important transformative technology of at least the first half of the 21st century. A key question in my mind is whether AI is likely to follow an evolutionary path similar to those of previous historically transformative technologies like electricity, computers, and most recently the internet, or whether it's in a class by itself because, unlike previous technologies, it deals with human-like intelligence and could even pose a threat to humans.
Over the past two centuries we’ve learned that historically transformative technologies have great potential from the outset, but realizing that potential requires major complementary investments, including business process redesign; innovative new products; the re-skilling of the workforce; and a fundamental rethinking of organizations, industries, economies, and societal institutions.
I’ve concluded that AI should be treated by company senior executives and board members like other major technologies such as the internet. When dealing with AI projects, they should focus on how to best leverage AI for business value.
Boards should ask questions about the specific AI projects a company wants to pursue, especially the business case, whether the company has the right skills, key partners, and so on. They should also ask questions about potential problems: not whether AI poses an existential risk to humanity, but major legal issues like copyright infringement, regulatory issues like those being considered by the EU, and potential cybersecurity issues, all of which Dave, Noah, and Bill can best comment on.
What are the key near term business opportunities and risks for AI?
A recent McKinsey study on the economic potential of generative AI identified four major near term business areas of opportunity: customer operations, marketing and sales, software engineering, and research and development. These all feel right to me. But I’m much less sure about the consumer applications envisioned for generative AI and chatbots, such as AI assistants, mentors, tutors, coaches, advisors, and therapists. Human tasks and behaviors are much more complicated and far less well understood than business processes, so those applications are likely to take longer to develop.
ChatGPT was released on November 30, 2022, and within two months it had been accessed by over 100 million users, propelling AI into a whole new level of expectations and setting off an AI gold rush that’s been attracting lots of attention from startups and investors.
All previous historically transformative technologies have been accompanied by a similar gold rush (remember the 1990s internet dot-com bubble), and in all previous cases the bubble eventually burst. While it's too early to tell, I believe that something similar is likely to happen with the current AI gold rush, something boards should be very watchful for as companies contemplate major investments and acquisitions.
After a few follow-up questions, Eisen discussed the geopolitical, regulatory, and legal environments with Priestap, Phillips, and Kappos respectively, followed by a discussion among all four panel members.
All four panelists brought up the increasing importance of data as the biggest change brought about by AI over the past two decades. I mentioned that I’ve seen an increasing number of articles reminding us of an expression widely used since the early days of computing: garbage in, garbage out. Given the central role of data in AI, it’s very important to systematically engineer the data used in training AI systems.
Until about 2000, the notion of carefully engineering the components of a computer system applied primarily to the development of hardware and software, while data itself played a significantly smaller role in the actual development of the system. This started to change in the 2000s with the advent of big data, followed by machine and deep learning in the 2010s and now generative AI and LLMs. AI is truly a data-driven discipline.
The increasing importance of data in the training and development of AI systems is giving rise to a number of important business decisions, such as the use of open, generally available versus proprietary data, and the need for partnerships and acquisitions to get access to the necessary data. All that data needs to be carefully managed and protected against the increasing number of data theft incidents. Given the strategic nature of these decisions, it’s important that they be carefully reviewed at the board level.
A recent article in The Economist, “What are the chances of an AI apocalypse?,” wrote about the potential existential risk posed by AI. The article was based on a study by economist Ezra Karger and political scientist Philip Tetlock that compared the predictions of superforecasters, “general-purpose prognosticators with a record of making accurate predictions on all sorts of topics, from election results to the outbreak of wars,” with the predictions of domain experts in a number of existential risk domains, including AI, nuclear war, and pandemics.
“The most striking conclusion of the study was that the domain experts, who tend to dominate public conversations around existential risks, appear to be gloomier about the future than the superforecasters,” said The Economist. “The median superforecaster reckoned there was a 2.1% chance of an AI-caused catastrophe, and a 0.38% chance of an AI-caused extinction, by the end of the century. AI experts, by contrast, assigned the two events a 12% and 3% chance, respectively.” Similarly, superforecasters estimated that there was a roughly 3.5% chance of a nuclear catastrophe and under a 0.1% chance of a nuclear extinction event by the year 2100, while the experts assigned the two events an 8% and 0.5% chance, respectively.
After listening to my fellow panel members, I became even more convinced that fears that a super-intelligent AI could pose an existential threat to humanity should not be our main AI concern at this time and possibly ever. AI concerns should be focused on the regulatory, legal, and geopolitical arenas, where we’re already seeing several serious AI issues.
For example, a recent front-page NY Times article reported that an increasing number of writers, artists, actors, social media companies, and news organizations are fed up with AI companies consuming their online content without their consent. “At least 10 lawsuits have been filed this year against A.I. companies, accusing them of training their systems on artists’ creative work without consent,” said the article. We’re also starting to see lawsuits against companies whose chatbots or search engines generate allegedly AI-hallucinated libelous statements.
AI has been a core issue in the Hollywood actors strike. The actors have expressed grave concerns that AI and computer-generated imagery (CGI) could be used to replicate and modify their performances based on their previous work, without their being compensated or consulted.
The Federal Trade Commission (FTC) recently announced plans to investigate ChatGPT maker OpenAI for providing false information in chat results. Similarly, SEC chair Gary Gensler announced that he’s considering new rules to oversee the impact of AI on financial markets.
Let me reiterate that after decades of promise and hype, AI has finally emerged as the defining technology of our era. The aforementioned McKinsey study on the economic potential of AI estimated that over time, AI could add between $17 trillion and $25 trillion to the yearly global economy. We can also expect a significant increase in productivity growth, the creation of new industries and jobs, and the expansion of scientific and health care breakthroughs.
Finally, let me point out that eCornell is organizing a Board Governance Summit that will take place on November 3 and 4 at the Cornell Tech campus on New York City’s Roosevelt Island.