I recently listened to a fascinating podcast where NY Times columnist Ezra Klein interviewed Berkeley psychologist Alison Gopnik. Professor Gopnik is best known for her research in cognitive science, particularly the study of children’s learning and development. She’s written extensively on the developmental phases of the human brain from babies to adults.
Gopnik, a member of the Berkeley AI Research group, has also been exploring the differences between human and machine intelligence, more specifically, what babies can teach us about AI. She’s long argued that babies and young children are smarter than we might think. In some ways they’re even smarter than adults, and far smarter than the most advanced AIs.
What do we mean by intelligence?
In 1994 the Wall Street Journal published “Mainstream Science on Intelligence,” an article that included a definition agreed to by 52 leading academic researchers in fields associated with intelligence:
“Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings - ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.”
This is a very good definition of general intelligence: the ability to effectively address a wide range of goals in different environments. It’s the kind of intelligence that’s long been measured in IQ tests, and that, for the foreseeable future, only humans have. Specialized intelligence, on the other hand, the ability to effectively address well-defined, specific goals in a given environment, is the kind of task-oriented intelligence that’s part of many human jobs. Over the past decade, our increasingly capable AI systems have achieved or surpassed human levels of performance in selected applications, including image and speech recognition, language translation, skin cancer classification, and breast cancer detection.
Psychologists have further identified two distinct types of human intelligence: fluid and crystallized. Fluid intelligence is the ability to quickly learn new skills, adapt to new environments and solve novel reasoning problems. It requires considerable raw processing power, generally peaks in our 20s and starts diminishing as we get older. Crystallized intelligence is the know-how and expertise which we accumulate over decades. It’s the ability to use our stocks of knowledge and experiences to make wise decisions. It generally increases through our 40s, peaks in our 50s, and does not diminish until late in life.
Babies explore; adults exploit
From an evolutionary biology point of view, children and adults are almost two different creatures. The former have evolved to explore, learn, and change through their long period of childhood, adolescence, and early adulthood, while the latter have evolved to exploit their accumulated knowledge, make plans, find resources, and nurture the young.
“Babies are captivated by the most unexpected events,” wrote Gopnik in a 2009 NY Times article. “Adults focus on objects that will be most useful to them. But … children play with the objects that will teach them the most. … Each kind of intelligence has benefits and drawbacks. Focus and planning get you to your goal more quickly but may also lock in what you already know, closing you off to alternative possibilities. We need both blue-sky speculation and hard-nosed planning.”
“Part of the explanation for these differing approaches can be found in the brain. The young brain is remarkably plastic and flexible. Brains work because neurons are connected to one another, allowing them to communicate. Baby brains have many more neural connections than adult brains. But they are much less efficient. Over time, we prune away the connections we don’t use, and the remaining ones become faster and more automatic. Moreover, the prefrontal cortex, the part of the brain that controls the directed, planned, focused kind of intelligence, is exceptionally late to mature, and may not take its final shape until our early 20s.”
Turing’s Learning Machine
“It’s kind of striking,” said Gopnik in the NY Times podcast, “that the very best state-of-the-art systems that we have that are great at playing Go and playing chess and maybe even driving in some circumstances, are terrible at doing the kinds of things that every two-year-old can do… maybe we could look at some of the things that the two-year-olds do when they’re learning and see if that makes a difference to what the AIs are doing when they’re learning.”
Decades ago, Alan Turing had a similar idea. In his seminal 1950 paper, Computing Machinery and Intelligence, Turing proposed what’s famously known as the Turing test: a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. If a human at a keyboard couldn’t tell whether he or she was interacting with a machine or a human, the machine was considered to have passed the Turing test.
“Almost no one remembers that in the very same paper Turing suggested that the key to achieving intelligence would be to design a machine that was like a child, not an adult,” wrote Gopnik in a 2015 article, What Babies Tell Us About AI. Turing presciently suggested that the key to human intelligence was our ability to learn. We should therefore design a learning machine that simulates how a child’s mind learns, including its initial state at birth, the education it has received over the years, and any other experiences that have shaped its mind into that of an adult. “For the last 15 years or so computer scientists and developmental cognitive scientists have been trying to figure out how children learn so much so quickly, and how to design a machine that could do the same.”
The Ultimate Learning Machines
“The history of AI is fascinating because it’s been so hard to predict which aspects of human thought would be easy to simulate and which would be difficult. At first, we thought that things like playing chess or proving theorems - the corridas of nerd machismo - would prove to be hardest for computers. In fact, they turn out to be easy. Things every fool can do like recognizing a cup or picking it up turn out to be much harder. And it turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby.”
“So where are machines catching up to children and what kinds of learning are still way beyond their reach?” Gopnik explored this question in a 2019 WSJ essay, The Ultimate Learning Machines.
Data. Machine learning is data hungry. Deep learning methods are particularly valuable for extracting patterns from complex, unstructured data, including audio, speech, images and video. To do so, they require millions and millions of data records to perform at the level of humans.
“Children, on the other hand, can learn new categories from just a small number of examples. A few storybook pictures can teach them not only about cats and dogs but jaguars and rhinos and unicorns. The kind of data that children learn from is also very different from the data AI needs. The pictures that feed the AI algorithms have been curated by people, so they generally provide good examples and clear categories.”
Supervision. The deep in deep learning refers to the many layers of the neural networks these methods rely on, not to any depth of understanding. While capable of some amazing results, in its present incarnations deep learning is actually quite shallow. Our present AI applications do just one thing quite well, but to train them, each image must be given a concrete label and each action a concrete score.
A baby’s learning, by contrast, is largely unsupervised. “Parents may occasionally tell a baby the name of the animal they’re seeing or say ‘good job’ when they perform a specific task. … Most of a baby’s learning is spontaneous and self-motivated.”
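The label requirement behind supervised learning can be made concrete with a small sketch. The toy perceptron below is purely illustrative (all names are hypothetical, not from any system discussed here): it cannot take a single learning step unless every training example arrives with an explicit label, which is exactly the kind of curated supervision babies mostly do without.

```python
def perceptron_update(weights, example, label, lr=0.1):
    """One supervised update: requires a concrete label (+1 or -1)."""
    activation = sum(w * x for w, x in zip(weights, example))
    prediction = 1 if activation >= 0 else -1
    if prediction != label:  # learning happens only when the label disagrees
        weights = [w + lr * label * x for w, x in zip(weights, example)]
    return weights

# Every training record must arrive as a (features, label) pair;
# without the label, the update rule above has nothing to learn from.
data = [([1.0, 2.0], 1), ([2.0, -1.0], -1), ([0.5, 1.5], 1)]
w = [0.0, 0.0]
for features, label in data:
    w = perceptron_update(w, features, label)
```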
“Even with a lot of supervised data, AIs can’t make the same kinds of generalizations that human children can. Their knowledge is much narrower and more limited, and they are easily fooled. … Current AIs are like children with super-helicopter-tiger moms: programs that hover over the learner dictating whether it is right or wrong at every step. Not unlike human children, those helicoptered AI children can be very good at learning to do specific things well, but they fall apart when it comes to resilience and creativity.”
Common sense. “One of the secrets of children’s learning is that they construct models or theories of the world. Toddlers may not learn how to play chess, but they develop common-sense ideas about physics … even 1-year-old babies know a lot about objects: They are surprised if they see a toy car hover in midair or pass through a wall, even if they’ve never seen the car or the wall before.”
One of the grand challenges in AI is to design a system that understands how the world works as well as an 18-month-old does. AI does best on test data that closely resembles its training data, but it does much less well when attempting to generalize or extrapolate beyond the data it was trained on.
Curiosity. Children are notoriously curious and active experimenters. Parents are constantly trying to keep their children out of trouble because children like getting into everything. Babies are constantly exploring how the world works by picking things up and dropping them, putting them together and taking them apart.
A cutting-edge research project in Berkeley’s AI Lab is trying to develop AIs that are similarly curious active learners. “Usually, machine learning systems reward the AI when they do something right, like bumping up their score in a game. But these AIs get a reward when they do something that leads to a surprising or unexpected result, and this makes them explore weird events, just like the babies. In fact, AIs that are motivated by curiosity are more robust and resilient learners than those that are just motivated by immediate rewards.”
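The reward-for-surprise idea Gopnik describes is known in the reinforcement-learning literature as intrinsic motivation or curiosity-driven learning. As a minimal sketch, and emphatically not the Berkeley lab’s actual code (all names here are hypothetical), an agent can treat its own prediction error as the reward, so that predictable actions quickly stop being rewarding and the agent is pushed toward what it cannot yet predict:

```python
class CuriousAgent:
    """Rewards itself for surprise rather than for a game score."""

    def __init__(self, n_actions, lr=0.5):
        self.predicted = [0.0] * n_actions  # predicted outcome per action
        self.lr = lr

    def intrinsic_reward(self, action, outcome):
        """Reward = how surprising the outcome was; then update the prediction."""
        surprise = abs(outcome - self.predicted[action])
        self.predicted[action] += self.lr * (outcome - self.predicted[action])
        return surprise

agent = CuriousAgent(n_actions=2)
# Repeating a perfectly predictable action (it always returns 1.0)
# yields shrinking rewards as the agent learns to predict it.
rewards = [agent.intrinsic_reward(0, 1.0) for _ in range(5)]
```

Because the reward stream dries up wherever the agent’s model is already accurate, exploration is steered toward the unexpected, much like the babies in Gopnik’s account.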
Social learning. “A final crucial factor that sets children apart from AIs is the way that they learn socially, from other people. Culture is our nature, and it makes our learning particularly powerful. Each new generation of children can take advantage of everything that earlier generations have discovered.”
AIs also learn from people, but they do so in a relatively simple and thoughtless way. Machine learning algorithms can only recognize images because millions of people have simplified and labeled those images. And they learn to translate from one language to another by being trained with enormous databases of human translations.
“Is it possible for physical systems to solve all of these problems?” asks Gopnik in conclusion. “In some sense, it must be, because those physical systems already exist: They’re called babies. … But we are still very far from approaching that level of intelligence in machines. That’s OK, because we don’t really want AIs to replicate human intelligence; what we want is an AI that can help make us even smarter. To create more helpful machines, like curious AIs or imitative robots, the best way forward is to take our cues from babies.”