“Even as some instructors remain fervently opposed to chatbots, other writing and English professors are trying to improve them,” observed a recent New York Times article, “AI Is Coming to Class.” At the heart of the article is a debate now unfolding across higher education: whether — and how — university students should be taught to properly use generative AI.
The article illustrates this debate through the first-year writing program at Barnard College, which generally bans generative AI tools such as ChatGPT, Claude, and Gemini — systems that can readily draft paragraphs, conduct research, and compose essays. The program’s policy warns students that AI tools are “often factually wrong” and “deeply problematic,” perpetuating misogyny as well as racial and cultural biases.
Yet the program has made an exception for Benjamin Breyer, a senior lecturer in Barnard’s English Department, who is determined to see whether AI can supplement, rather than short-circuit, students’ efforts to learn academic writing. In doing so, Breyer represents a growing group of faculty who are experimenting with how AI might be used constructively — even as many of their colleagues remain firmly opposed.
I am particularly interested in this debate because it strongly echoes my own early experiences with computers as a physics student in the 1960s — a time when the legitimacy of using machines as intellectual tools was also very much in question. Let me explain.
The key event that launched my six-plus-decade involvement with computers took place in the summer of 1962, just before I entered college at the University of Chicago. Planning to major in math and physics, I was looking for a summer job in the university’s research labs. When I couldn’t find one, I resigned myself to spending the hot Chicago summer working in the library stacks.
But as fate would have it, I learned that a new computation center was being established at the university. I went over and met its director, physics professor Clemens Roothaan, one of the pioneers in the use of computers in physics and chemistry research. Even though I knew nothing about computers — few high school graduates did in 1962 — I ended up getting a summer job in the brand-new computation center, complete with an air-conditioned office.
I quickly discovered that I enjoyed learning how to program, initially in assembly language and later in Fortran. I continued working part-time at the computation center throughout my college years, then went on to earn a Ph.D. in physics at the University of Chicago, with Professor Roothaan as my thesis advisor. As I was finishing my degree, I realized that I enjoyed the computing side of my work more than the physics itself, and in 1970 I joined the computer sciences department at IBM’s Thomas J. Watson Research Center.
In the 1960s, the use of computers for scientific research was still relatively new. Some older physics professors looked askance at the growing use of computers, feeling that this wasn’t “real physics” — that is, the kind of theoretical physics they had grown up with over the past few decades. This reaction strongly echoes today’s discussions about the proper use of generative AI-based tools in higher education.
“Legacy academic organizations are evolving in their approaches,” noted the New York Times article, citing an October 2024 working paper by the Modern Language Association (MLA), “Building a Culture for Generative AI Literacy in College Language, Literature, and Writing.”
“An almost universal graduation requirement, first-year writing courses are intended to provide students with literacy skills and strategies they will draw from throughout their college careers,” the MLA paper states. “For this reason, such courses have a special responsibility to teach students how to use GAI critically and effectively in academic situations and across their literate lives.”
The paper goes on to note that “GAI is no longer a stand-alone technology in the sense of its being ‘out there’ (as a website or app) that individuals may choose either to use or not use. GAI is increasingly being embedded and integrated directly into everyday services, applications, and devices, from internet search to email and word processing.” As a result, “there are no clear dividing lines between what is and isn’t a GAI tool or technology,” making a focus on GAI literacy urgent.
At Barnard, Professor Breyer is keenly aware of his colleagues’ misgivings about AI. But based on his classroom experiments, he has become a source of reassurance that faculty will continue to play a central role in this transition.
“This is no threat to us at present,” he tells them. “A.I. may help with the expression of an idea and articulating that expression. But the idea itself — the thing that’s hardest to teach — is still going to remain our domain.”
To better leverage generative AI in his writing class, Breyer developed his own chatbot, with the help of a Barnard software developer, to act as an interactive workbook. He gave the chatbot a persona and named it Althea, after a Grateful Dead song, describing it as “a tutor at an elite liberal arts college.” Althea assists students with editing their writing and developing a concise thesis statement that summarizes their central argument. Crucially, the chatbot is designed to issue a “hard refusal” if a student asks it to write something for them.
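The article doesn’t describe how Althea was actually built, but the basic pattern it suggests, a persona set through a system prompt together with an explicit instruction to refuse ghostwriting, is straightforward to sketch. The short Python example below is purely my own illustration, not Breyer’s implementation; the SDK, model name, prompt wording, and refusal rule are all assumptions.

```python
# A minimal, hypothetical sketch of a persona-plus-refusal tutor chatbot.
# Althea's actual implementation is not described in the article; this uses
# the OpenAI Python SDK purely for illustration, and the prompt wording and
# model choice are assumptions, not Breyer's design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are Althea, a tutor at an elite liberal arts college.
Help the student edit their own prose and sharpen their thesis statement by
asking focused, probing questions. Never draft sentences, paragraphs, or
essays on the student's behalf. If asked to write something for the student,
refuse firmly and redirect them to revise their own words."""

def tutor_reply(student_message: str) -> str:
    """Send one student turn to the tutor persona and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Expected behavior: a refusal, plus a question that pushes the
    # student to articulate the introduction in their own words.
    print(tutor_reply("Can you write my introduction for me?"))
```

In practice, a “hard refusal” would likely not rest on the prompt alone; an application like this would typically also screen requests in code before they ever reach the model.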
Breyer then tested Althea’s effectiveness by comparing outcomes between students who used the tool and those in another section of his writing course where AI was entirely prohibited. After a year, he found that students in the non-AI section initially performed better on writing exercises than the students in the section that used Althea, largely because Althea was not able to improve the quality of their ideas. Rather than abandoning the project, however, Breyer refined the chatbot, training it to ask more focused and probing questions, and reintroduced it the following semester.
“The improvements to Althea helped,” he said. “For the first time, students this past semester who used the bot did better on the exercises than those who didn’t.” Perhaps more importantly, the refinement process proved “reciprocal,” helping Breyer himself learn to ask better, more targeted questions of his students as he improved the tool.
Let me conclude with a personal example from my own recent experience — one that illustrates what I believe is a legitimate and increasingly important use of AI as a cognitive tool in the classroom and beyond.
For more than 20 years, I have written a weekly blog. Writing and editing each post takes many hours — not only drafting and revising, but thinking through the subject and framing the argument. Over time, blogging has become a way to explore new ideas and keep up with advances in technology.
From 2012 to 2020, edited versions of my posts were republished in the WSJ CIO Journal. I often preferred the edited versions, which is not surprising given that the editing was done by skilled CIO Journal journalists. I’ve missed having such editing assistance, and a few months ago I wondered whether an AI chatbot could play a similar supporting role, helping me improve the clarity and flow of my blogs.
So I logged on to ChatGPT and asked the chatbot to help me lightly edit a post while preserving my style and voice. I pasted in the draft, and within seconds the chatbot returned a lightly edited version, explicitly stating that it had focused on clarity, consistency, and grammar without changing my analytical arc or voice. I was frankly struck by how effective the edits were. I accepted many of them, rejected others, and in the process remained fully responsible for the ideas, arguments, and final judgment.
This, to me, is the central lesson for higher education. AI did not replace the long, hard work it takes me to write the blogs, nor did it generate ideas on my behalf. Instead, it functioned much like earlier computational tools did for scientists decades ago: amplifying effort, sharpening expression, and freeing time for deeper intellectual work. The challenge for universities is not whether students will use AI — they already are — but whether they will be taught to use it well, critically, and responsibly, as an aid rather than a substitute for their own thinking.
