When OpenAI released the artificial intelligence chatbot ChatGPT in late 2022, expectations inside the company were modest. Earlier consumer-facing language models had drawn little interest—or even backlash. Instead, ChatGPT became perhaps the most successful consumer product in history. As a recent New York Times article, “The Race to Build the World’s Best Friend,” reported, in just over three years ChatGPT amassed roughly 800 million weekly active users, stunning both its creators and the broader tech industry.
Much of ChatGPT’s success stems from the generative pre-trained transformer technology (the GPT in its name) used to train OpenAI’s large language model (LLM) on massive amounts of natural language data, enabling it to interact with humans in English and the other languages on which it was trained.
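At its core, that pretraining step is one very simple objective, repeated over trillions of tokens of text: predict the next token. Purely as a rough illustration (a toy model in PyTorch, not OpenAI’s actual architecture or training code), the objective looks something like this:

```python
import torch.nn.functional as F
from torch import nn

# Toy GPT-style language model: the pretraining objective is simply to predict
# the next token at every position of a text sequence. This is an illustrative
# sketch, not OpenAI's architecture or training code.

class TinyLM(nn.Module):
    def __init__(self, vocab_size=50_000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask: each position may only attend to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.blocks(self.embed(tokens), mask=mask)
        return self.head(h)

def pretraining_loss(model, tokens):
    # Shift by one: the prediction at position t is scored against token t + 1.
    logits = model(tokens[:, :-1])
    targets = tokens[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
```

Nothing in that objective rewards being pleasant, safe, or even helpful; it only rewards predicting what text comes next. That gap is what the rest of the story is about.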
“But this is only part of the story — and, as OpenAI discovered, perhaps not even the most important part,” noted the article. “In its raw state, the output of GPTs can be off-putting and bizarre. It is only after a second, post-training phase that AI is fit for human interaction. While the engines that power ChatGPT are undeniably impressive, what has made the product succeed is not its capabilities. It is ChatGPT’s personality.”
OpenAI engineers made a crucial discovery: people preferred an AI chatbot that had been fine-tuned for human interaction. The company then hired a large number of human evaluators to train the chatbot to generate more human-friendly responses instead of highly accurate but cold, fact-filled ones. Without such post-training, AI could not reliably interact with humans, and that realization proved critical to the success of AI chatbots.
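A common way to turn those evaluators’ judgments into a training signal, and the approach OpenAI has publicly described for ChatGPT (reinforcement learning from human feedback), starts by fitting a reward model to pairwise preferences: shown two candidate responses, which one did the human prefer? The sketch below shows only that preference loss; score_response is a hypothetical stand-in for a real neural reward model, not OpenAI’s implementation:

```python
import torch
import torch.nn.functional as F

# Sketch: turning human evaluators' choices into a training signal.
# A "reward model" assigns a scalar score to a (prompt, response) pair and is
# trained so the response the evaluator preferred scores higher than the one
# they rejected. score_response is a hypothetical stand-in for that model.

def preference_loss(score_response, prompt, chosen, rejected):
    r_chosen = score_response(prompt, chosen)      # score of the preferred reply
    r_rejected = score_response(prompt, rejected)  # score of the rejected reply
    # Bradley-Terry-style pairwise objective: -log sigmoid(r_chosen - r_rejected)
    # is small when the preferred reply already outscores the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Example with a dummy scoring function (real reward models are neural networks):
if __name__ == "__main__":
    dummy = lambda prompt, reply: torch.tensor(float(len(reply)))  # longer = "better"
    loss = preference_loss(dummy, "Explain rainbows", "A friendly, clear answer...", "Rain.")
    print(loss.item())
```

In a second stage, the trained reward model is used (via reinforcement learning, or simply by generating several drafts and keeping the highest-scoring one) to steer the chatbot toward the kinds of responses humans rated as warmer and more helpful.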
“Post-training makes A.I. legible but generates problems of its own. Developers want AI to be friendly and approachable — but at the same time, it can’t be a doormat or a sycophant. No one likes a kiss-ass, and if the AI can’t push back sometimes, a user can get trapped in a folie à deux with the machine. Without the right filters, AI can reportedly amplify psychosis and conspiratorial thinking, and purportedly even guide people toward self-harm. Finding the proper balance between helpfulness and codependency, between friendliness and flattery, is one of the biggest problems AI faces.”
As it turns out, a few days after I read this New York Times article, another one caught my attention, this time about the evolution of wolves into dogs. Titled “The 12,000-Year-Old Wolves That Ate Like Dogs,” it was based on research offering clues to how wolves were domesticated. “As the Late Pleistocene ice age drew to a close, people and wolves began to bond. From there, it was just a few millenniums to puppy yoga and dog influencers,” said the article. “But the details of exactly how and when wolves were tamed and domesticated remain up for intense debate.”
“A new study has added a crucial clue in the form of a 12,000-year-old leg bone from the Swan Point archaeological site in Alaska.” The finding offers some of the earliest evidence of dog domestication in the Americas and appears to capture a key moment in the budding relationship between wolves and people.
With “The Race to Build the World’s Best Friend” still fresh in my mind, I wondered whether OpenAI’s engineers had stumbled onto much the same process: in its raw state, AI is not fit for human interaction, and without post-training, that is, domestication, it cannot reliably interact with humans.
Domestication is generally defined as “the adaptation of a plant or animal from a wild or natural state (as by selective breeding) to life in close association with humans.”
So the question that came to mind was whether AI, in its natural state, is such a different species from humans that it requires post-training domestication to interact reliably with us.
Yes, argues AI technologist and entrepreneur Alberto Romero in a Medium article, “The Shape of Artificial Intelligence.”
“Human intelligence is the result of biological evolution,” wrote Romero. “Our neural nets were optimized over millions of years for the survival of a tribe in the jungle. Every capability we have — from language to tool use to face recognition — is a byproduct of the pressure to survive, reproduce, and navigate social hierarchies. … We are efficiently intelligent because every evolutionary mutation that could have sent us down a less efficient path would have been doom from the standpoint of natural selection. The human brain runs on about 20 watts. It has to.”
AI intelligence is very different. “AI intelligence is the result of mathematical optimization. It does not care about 20 watts (nor do the companies selling it),” he further explained. Romero referenced an article by AI researcher Andrej Karpathy, who wrote that “Everything about the LLM stack is different (neural architecture, training data, training algorithms, and especially optimization pressure), so it should be no surprise that we are getting very different entities in the intelligence space.”
Digital intelligence could well be a better form of intelligence because it can pack far more knowledge than humans into its digital neural architecture—“but one that could never evolve biologically because it is too energy-intensive. It needed us to create it,” added Romero. “Capitalism is the new evolution for these entities. An AI system can be energy-intensive and still survive, as long as it pleases shareholder selection. We can create intelligences that burn gigawatts to solve problems, taking the best from nature (the neural net structure) and leaving the flaws (the constraints of biology).”
There are other very different kinds of biological intelligence, such as that of bats, which use echolocation for navigation and finding prey, and the octopus, “a mind built on an entirely different plan from vertebrates, with neurons distributed through its tentacles, three hearts, and soft, gelatinous tissue.” But even the bat and the octopus—evolutionarily distant as they are—are still close enough to be regarded as cousins. “They eat, mate, die. They are carbon-based.”
“AI is a species so distinct that it makes the octopus look like a second sibling once removed. … We squabble over differences with other humans who are 99% genetically identical to us without realizing that we are interacting daily with an entity that is perhaps 1% similar to us (it was trained on the corpus of human text, so it’d be an exaggeration to say 0.01% or something), yet comparably intelligent.”
What OpenAI ultimately discovered was not just how to build a powerful intelligence, but how to domesticate one.
In its raw form, a large language model is closer to a wild animal than a companion: immensely capable, unpredictable, and indifferent to human norms. Post-training—through thousands of human judgments about tone, helpfulness, honesty, and restraint—functions much like domestication did for wolves. Over time, selective pressures favored behaviors that made early dogs more useful and less threatening to humans: attentiveness, responsiveness, and a capacity for social bonding.
Similarly, post-training does not make AI smarter in a narrow technical sense; it makes it livable. It teaches an alien intelligence how to coexist with humans—when to be helpful, when to push back, when to stay silent, and when to express uncertainty. Without this domestication phase, AI may be powerful but socially unfit, capable of amplifying delusions or reinforcing harmful feedback loops rather than serving as a reliable partner.
The success of ChatGPT suggests that the defining challenge of artificial intelligence is no longer intelligence itself, but alignment—how to shape a non-biological, energy-intensive, mathematically optimized form of intelligence so it can function safely and productively in human society. Just as domesticated animals were not created by a single genetic leap but by long, labor-intensive coevolution with humans, today’s AI systems are being shaped not only by code and compute, but by millions of small human judgments about what kind of “creature” we want them to become.
