“Even in a world with AI superintelligence, one thing will be true: we will always have the responsibility to make tough decisions,” wrote Bharat Chandar, a postdoctoral researcher at the Stanford Digital Economy Lab, in “AI can’t make your toughest decisions.” “Nearly all consequential choices in life depend on a mix of two things: intelligence and values. AI can improve our intelligence, but it does not offer respite from sorting out our values when making decisions. We should design for AI alignment accordingly.”
In his essay, Chandar noted that:
- All important choices depend on both intelligence and values.
- Intelligence is the capacity to grasp the future consequences of our actions.
- Values, what you like and dislike and what you think is right and wrong, are not principally about intelligence, so it doesn’t make sense to delegate them to a computer just because it may be “smarter” than us.
- This means that we should make our own choices, not just outsource decision-making to an AI.
- AI alignment should be designed to enable us to do that.
Even in a world with AI superintelligence, we will always have the responsibility to make tough decisions. And making such tough decisions requires critical thinking skills.
Over the past few years, the need to align machine and human intelligence has been a common theme in AI research. For example, in a series of articles on the economic value of AI, University of Toronto professors Ajay Agrawal, Joshua Gans, and Avi Goldfarb explained that decisions typically involve two main activities: prediction and judgment.
Whereas predictions are generally grounded in concrete information and can be handled by technology, judgment is based on subjective factors like intuition, unconscious feelings, or analogies with somewhat similar situations from our past. Judgment is the part of decision-making that, unlike prediction, cannot be explicitly described to and performed by a machine.
“Judgment is the ability to make considered decisions — to understand the impact different actions will have on outcomes in light of predictions,” wrote the authors in an MIT Sloan Management Review article. “Tasks where the desired outcome can be easily described and there is limited need for human judgment are generally easier to automate. For other tasks, describing a precise outcome can be more difficult, particularly when the desired outcome resides in the minds of humans and cannot be translated into something a machine can understand.”
In “The EPOCH of AI: Human-Machine Complementarities at Work,” MIT’s Isabella Loaiza and Roberto Rigobon wrote that instead of asking whether machines are going to automate most jobs, we should shift the focus from the machines to the humans by asking different questions, such as “How can humans and machines complement each other?” and “What human capabilities complement AI shortcomings?”
Based on a series of interviews with a wide range of experts, Loaiza and Rigobon identified five groups of capabilities that enable humans to do work in areas where machines are limited: Empathy and Emotional Intelligence; Presence, Networking, and Connectedness; Opinion, Judgment, and Ethics; Creativity and Imagination; and Hope, Vision, and Leadership, which make up the acronym EPOCH in the article’s title.
In a second essay, “Will AI create a generation of non-thinkers?” Chandar wrote about his concern that a generation of students may not develop the critical skills necessary to think for themselves because they increasingly rely on AI for their coursework and other learning assignments.
“Recall staring blankly at a page, struggling to come up with an answer to an essay prompt,” he wrote. “Formulating and articulating a thought might have taken hours, each sentence revised over and over. Working through writer’s block to craft a compelling argument was a painstaking rite of passage towards becoming an effective thinker and communicator.”
“Do students today have this experience?” Chandar asked. “If AI can write our essays, what happens to human thought?” He referenced a 2024 survey of the state of AI in the classroom by Common Sense Media, an educational nonprofit. Based on responses from nearly 1,050 parents or guardians of teenagers aged 13 to 18, the survey showed that 70% of these teens had used at least one type of generative AI tool for a variety of purposes. Forty percent of teens reported using gen AI specifically for schoolwork, with 41% doing so with their teacher’s permission, 46% without permission, and 12% unsure whether they had their teacher’s permission.
The survey raises troubling questions, said Chandar. If gen AI is being used by a high percentage of students to do their homework, they may not be learning critical skills because they are counting on gen AI to do the work for them.
Similar concerns were raised in a recent article in The Economist, “Will AI make you stupid?”
The article starts by describing a recent MIT study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” The study found that while relying on AI for help in writing an essay would certainly lighten the students’ mental load, that help would come at a cost. “Over the course of a series of essay-writing sessions, students working with (as well as without) ChatGPT were hooked up to electroencephalograms (EEGs) to measure their brain activity as they toiled. Across the board, the AI users exhibited markedly lower neural activity in parts of the brain associated with creative functions and attention. Students who wrote with the chatbot’s help also found it much harder to provide an accurate quote from the paper that they had just produced.”
“The findings are part of a growing body of work on the potentially detrimental effects of AI use for creativity and learning,” said The Economist. “This research points to important questions about whether the impressive short-term gains afforded by generative AI may incur a hidden long-term debt.” These findings are similar to those of other recent studies on the relationship between AI and critical thinking, such as one from Microsoft Research and another from Swiss Business School professor Michael Gerlich. Researchers have stressed that these are early findings and that much more work is needed to understand whether there is a definitive link between elevated AI use and lower critical thinking skills.
In his essay on the impact of AI on education, Chandar referenced another article in The Economist, “How AI will divide the best from the rest,” which raised an important question: could AI end up widening social divides?
When generative AI first became popular a few years ago, there was an expectation that the use of AI would level the playing field in a number of occupations. For example, a 2023 study by economists Erik Brynjolfsson, Danielle Li, and Lindsey Raymond found that access to AI-generated recommendations increased the productivity of lower-skilled, less experienced customer service agents by around 30%, while having little impact on the productivity of higher-skilled, more experienced workers. And in a 2024 article, “AI Could Actually Help Rebuild the Middle Class,” MIT economist David Autor argued that AI offers us the opportunity to extend the value of human expertise to a larger set of workers who have the necessary foundational training to perform high-level tasks.
“More recent findings have cast doubt on this vision, however,” said The Economist. “They instead suggest a future in which high-flyers fly still higher — and the rest are left behind. In complex tasks such as research and management, new evidence indicates that high performers are best positioned to work with AI. Evaluating the output of models requires expertise and good judgment. Rather than narrowing disparities, AI is likely to widen workforce divides, much like past technological revolutions.”
“Although early studies suggested that lower performers could benefit simply by copying AI outputs, newer studies look at more complex tasks, such as scientific research, running a business and investing money. In these contexts, high performers benefit far more than their lower-performing peers. In some cases, less productive workers see no improvement, or even lose ground.”
“What should we do about a potential crisis of thinking?” asked Chandar in his essay on the impact of AI on education. “It is still early to determine what, if anything, should be done to support critical thinking with the adoption of AI. This is in part because the space is moving so quickly, and in part because of a paucity of high quality experimental research studying the consequences of generative AI on education choices. Data to monitor include long-term trends in performance on standardized test scores, which for now remain in flux due to the aftermath of the pandemic. If the evidence becomes concerning, then we should develop systems to ensure that future generations have the tools they need to think critically.”
