Artificial Intelligence Robots: Why Human Baby Brains Are Smarter Than AI
Machines can understand speech, recognize faces and drive cars safely, and those recent advances make the technology seem impressively powerful. But if the field of artificial intelligence is going to make the transformative leap to building human-like machines, it’ll first have to master the way babies learn.
“Relatively recently in AI there’s been a shift from thinking about designing systems that can do the sort of things that adults can do, to realizing if you want to have systems that are as flexible and powerful and do the kinds of things that adults do, you need to have systems that can learn the way babies and children do,” developmental psychologist Alison Gopnik, a researcher at the University of California, Berkeley, told International Business Times. “If you compare what computers can do now to what they could do 10 years ago, they’ve certainly made a lot of progress, but if you compare them to what a four-year-old can do, there’s still a pretty enormous gap.”
Babies and children construct theories about the world around them using the same approach scientists use to construct scientific theories. They explore and test their environment and the people in it with a systematic and experimental effort that is crucial to learning.
Gopnik recently worked with a team of researchers to demonstrate that children as young as 15 months old can learn cause-and-effect relationships from statistical data better than older children can. Babies and young children may be better learners because their brains are more flexible, or “plastic”: they are less constrained by pre-existing knowledge, which allows them to be more open-minded. Brains are not fixed; they change with every learning experience.
By combining expertise from developmental psychology and computer science, researchers may be able to unlock how the brains of the best learners in the world work and translate that computational power into a machine. Current AI requires massive amounts of data to extract patterns and draw conclusions. But babies, who have comparatively little data about the world around them, use a statistical strategy called Bayesian learning. That is, an interpretation isn't based on the known frequency of an outcome, information that babies don't have, but on a probability inferred from current knowledge and adjusted continually as new information arrives.
“The amazing thing about babies is they can see something once or hear a new word for the first time and they already have a good idea of what that new word could mean and how they could use that new word,” says Gopnik. “So these kinds of Bayesian approaches have been good at explaining why children are so good at learning even when they don’t have much data.”
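To make that concrete, here is a minimal sketch, in Python, of a single Bayesian update of the kind Gopnik describes: a learner hears an invented word once and shifts belief among a few guesses about its meaning. The hypotheses, priors, and likelihoods are made up for illustration and are not drawn from her experiments.

```python
# A minimal sketch of one Bayesian update. All hypotheses and numbers
# below are invented for illustration, not taken from Gopnik's studies.

def bayes_update(priors, likelihoods):
    """Return the posterior P(h | data) from priors P(h) and likelihoods P(data | h)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Competing guesses about what a brand-new word ("dax") refers to.
priors = {"the whole object": 0.5, "its color": 0.25, "its texture": 0.25}

# How likely a single pointing example is under each guess (assumed values).
likelihoods = {"the whole object": 0.9, "its color": 0.3, "its texture": 0.2}

posterior = bayes_update(priors, likelihoods)
print(posterior)
# Belief shifts sharply toward "the whole object" after just one example.
```

After a single observation, the posterior already concentrates on the most plausible hypothesis, which is the learning-a-lot-from-very-little pattern the researchers describe.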
Babies use this probabilistic model to generate a wide range of hypotheses, weighing possibilities against one another to draw conclusions. As the brain matures, it becomes more specialized in order to execute complex functions and, as a result, becomes less agile and harder to alter over time. Older learners develop biased perspectives as they learn more about the world and strengthen certain neural connections, a process that hampers their ability to form out-of-the-box hypotheses and abstract theories from little information. This is where babies and children under the age of five thrive.
“The tradeoff is, the more you know, the harder it is to consider new possibilities,” Gopnik said. “The more you know, the more you tend to rely on what you know and not be open to something new. From an evolutionary perspective, the whole point of babies is that they don’t know as much, so they're better at learning something new.”
During a baby's first few years of life, 700 new neural connections form every second, making a flexible brain integral to processing the rapid accumulation of information from environmental and social interactions. Plasticity early in life makes it easier to build the brain’s architecture from the ground up than to rewire the circuitry in adulthood. Bayesian learning has proven such a powerful account of childhood development that computer scientists are now using such models to design intelligent learning machines.
“The Bayesian math is trying to capture how babies learn,” computational cognitive scientist Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences at MIT, told IBT. He is currently collaborating with Gopnik to further research in their hybridized field of computers and psychology. “They come into the world prepared with basic building blocks to help them understand some of the most complex concepts. Then they have learning mechanisms that take those initial building blocks and try to make inferences from sparse data and create causal theories.”
The human brain, at every stage of development, takes in the physical world through a range of sensory systems: vision, hearing, smell, taste, touch, spatial orientation and balance. When a person is presented with limited data, the brain fills in the blanks, a property of neural structure known as degeneracy. Babies’ brains are especially adroit at processing information even when one or more senses is lacking.
“Children are learning like scientists in order to understand the world,” says Tenenbaum. “That includes forming theories, doing experiments, playing around and seeing what they might discover, actively thinking of what are the right ways to test out their theories or reacting to something they didn’t expect, and trying to figure out what went wrong and what went right.”
Taking Baby Steps
Tenenbaum and a team of researchers from New York University and the University of Toronto collaborated to design AI software capable of capturing new knowledge in a more efficient and sophisticated way. Their findings, published in the journal Science in December 2015, described machine-learning algorithms that bring computers closer to processing information the way we do.
The new AI program can recognize a handwritten character about as accurately as a human can after seeing just one example. Using a Bayesian program learning framework, the software generates a unique program for every handwritten character it has seen at least once before. It's when the machine is confronted with an unfamiliar character, though, that the algorithm's distinctive capabilities come into play. It switches from searching its data for a match to using a probabilistic program that tests hypotheses by combining parts and subparts of characters it has already seen to compose the new character, much as babies learn rich concepts from limited data when confronted with an object they've never seen before.
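The published model composes pen strokes into small generative programs, which is well beyond a short example. The hypothetical Python sketch below only illustrates the compositional part of the idea: a never-seen character is explained by recombining parts already in a library, with a crude reuse score standing in for the model's probabilistic machinery. The part names are invented.

```python
# A drastically simplified, hypothetical sketch of the compositional idea
# behind Bayesian program learning. The real model builds stroke-level
# generative programs; here a novel character is merely scored by how much
# of it can be assembled from parts seen before.

KNOWN_PARTS = {"vertical", "horizontal", "arc", "hook", "dot"}

def reuse_score(novel_parts, library=KNOWN_PARTS):
    """Crude stand-in for a posterior: the fraction of the novel
    character's parts that can be reused from the existing library."""
    reused = sum(1 for part in novel_parts if part in library)
    return reused / len(novel_parts)

# An unfamiliar character, described by its sub-parts. Three of the four
# pieces are familiar, so a plausible "program" for the new character can
# be composed from known parts plus one new one.
novel_character = ["vertical", "arc", "dot", "zigzag"]
print(reuse_score(novel_character))  # 0.75
```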
However, the software is still incapable of mimicking the way children learn autonomously by forming original hypotheses. The monumental shift in AI will come when researchers can design software capable of forming original hypotheses and authentic goals, such as wanting to recognize handwritten characters rather than merely following a researcher's instructions. Without self-driven goals, AI systems cannot truly function autonomously.
“Continuously learning with more and more data is what any AI system wants to do,” says Tenenbaum. “But learning autonomously is trickier. There’s always a human who sets up the whole thing, how much and what kind of data to give them. But babies make the choices for themselves. It’s still an open challenge for AI to be much more autonomous in structuring its own learning processes. Current AI systems don’t have any goals at all, and they can’t be in charge of their own learning if they don’t really have any goals. When a robot is instructed to pick up a box, it’s very tempting to look at that robot and say it’s doing what humans are doing. But they don’t have the sophisticated level of thinking that children do.”
Tenenbaum and his colleagues also employed deep learning algorithms modeled on a virtual network of neurons, which produce a very rough imitation of the way the human brain works. When such a machine processes an object, it searches its massive data set for pixel patterns that match the image in order to make an identification. Humans, on the other hand, rely on higher forms of cognitive function to interpret what an object is.
“We’re trying to write computer programs, which are like the software of the brain, which we would normally call the mind,” Tenenbaum says. “The mind is the software and it runs on the brain’s hardware, and we try to build at the software level. Neural networks in AI are computer programs at a software level.”
In 2013, the National Science Foundation awarded MIT a five-year, $25 million grant to establish the Center for Brains, Minds and Machines. Scientists and engineers across different fields work together to learn how the brain performs complex computations in the hopes of building intelligent machines that more closely resemble human intelligence.
“It’s only recently we’ve built the math and the computational models that can do this,” says Tenenbaum. “We’re going to need a lot more resources, smarts, know-how, and companies’ interest, as well as faster computers, probably. We might need to wait or rely on other engineering progress before we can capture the intelligence of even very young children.”
Building The First Baby’s Brain
The University of Auckland’s Bioengineering Institute in New Zealand is working to close the gap between brain and machine with an animated, interactive baby. Mark Sagar, director and founder of the Institute’s Laboratory for Animate Technologies and a multiple Academy Award winner for his animation work on Avatar and King Kong, spends his days in the lab playing peek-a-boo with a blonde baby on a 3D computer screen known as BabyX, a live system that learns, thinks, and creates its own facial expressions and reactions.
Sagar began his career building medical simulations of body parts at MIT, where he worked on bringing digital faces to life, and has used those skills to develop BabyX. The animated artificial intelligence can mimic his facial expressions, read simple words aloud, recognize objects, and play the classic video game Pong, and it gets smarter every day. BabyX is not only Sagar’s brainchild; it is also modeled after his own daughter Francesca at different ages.
To construct BabyX, Sagar scanned his daughter at 6 months, 18 months and 24 months old, and the scans were then uploaded into the system. He chose to replicate his daughter’s behaviors, facial expressions, and sounds through the animated technology as a metaphor for the infancy of artificial intelligence. Sagar affectionately refers to BabyX as “she” and explains that she functions using fiber optic cables driven by her simulated neural activity, like a spinal cord connected to a brain. Because it is an interactive avatar with artificial intelligence, BabyX can learn and retain information, unlike systems before it.
“In our case we are not developing artificial intelligence in the way that most people are looking at it,” Sagar told IBT. “There are many contested theories in neuroscience and cognitive science and current knowledge probably represents the tip of the iceberg. The most difficult part — but also one of the most deeply interesting parts — about biologically inspired approaches is how higher levels of cognition might emerge from the interaction of processes operating at different scales.”
Sagar and his team have tested BabyX in live human interaction. BabyX can process human emotions, understand the meaning behind people’s actions and respond according to what she has learned from past interactions with Sagar. Behind BabyX’s on-screen face is a live simulation of a brain, which cues the facial simulation to blink and smile back at its audience. Sagar believes the face is key to developing effective interactive artificial intelligence because it mirrors the brain and reveals the inner workings of the conscious mind. A simple smile, for example, is the result of a complex, interwoven system of connections within the brain.
“BabyX learns through association between the user’s actions and Baby’s actions,” says Sagar. “In one form of learning, babbling causes BabyX to explore her motor space, moving her face or arms. If the user responds similarly, then neurons representing BabyX’s actions begin to associate with neurons responding to the user’s action through a process called Hebbian learning. Neurons that fire together, wire together.”
As the process is repeated, the new neural connections build a map inside BabyX’s simulated brain that matches her actions to the user’s actions, setting the stage for higher forms of imitation. The human brain works much the same way: completing an action forms new connections, which are strengthened by repetition.
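The rule itself is simple enough to sketch. The hypothetical Python example below applies a Hebbian update while a "user" imitates randomly babbled actions; the network size, learning rate, and imitation probability are invented for illustration, not taken from BabyX.

```python
import numpy as np

# A minimal sketch of the Hebbian rule Sagar quotes ("neurons that fire
# together, wire together"). All parameters below are invented.

rng = np.random.default_rng(seed=0)
n_actions = 4                               # babbled actions / observed responses
weights = np.zeros((n_actions, n_actions))  # associations, initially absent
rate = 0.05                                 # learning rate

for _ in range(200):
    # BabyX babbles: one motor neuron fires at random.
    action = rng.integers(n_actions)
    baby = np.zeros(n_actions)
    baby[action] = 1.0

    # The user usually imitates, so the matching perceptual neuron fires too.
    user = np.zeros(n_actions)
    if rng.random() < 0.8:
        user[action] = 1.0

    # Hebb's rule: strengthen weights between co-active neurons.
    weights += rate * np.outer(baby, user)

# Repetition builds a map: the strong diagonal links each of BabyX's
# actions to the user's matching action, the substrate for imitation.
print(weights.round(2))
```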
Ultimately the simulated baby learns to develop responses on her own, using the information her brain has processed from the environment. BabyX is, in essence, learning through ever-improving code.
BabyX’s learning abilities run on biologically plausible learning models: algorithms that mimic how the human brain processes information and that modulate simulated brain chemicals such as dopamine and oxytocin. When she doesn’t understand a word or action, BabyX expresses confusion, but when she reads a word correctly she giggles with joy and her simulated level of the “happy hormone” dopamine rises. These algorithms control the neural systems that enable her to imitate, develop a reward system, and learn new information through interaction and demonstration.
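As a rough, hypothetical illustration of that reward loop, the sketch below lets a dopamine-like signal gate how strongly an association is reinforced. The update rule and numbers are invented and far simpler than Sagar's actual models.

```python
# A hypothetical sketch of reward-modulated learning: a dopamine-like
# signal gates how strongly the association just used is reinforced.
# The rule and values are invented, not taken from BabyX.

def reinforce(strength, reward, rate=0.3):
    """Strengthen an association in proportion to the reward signal;
    strength saturates at 1.0."""
    return strength + rate * reward * (1.0 - strength)

association = 0.1  # weak link between a written word and its spoken form
for correct in [True, False, True, True]:
    dopamine = 1.0 if correct else 0.0  # simulated "happy" signal
    association = reinforce(association, dopamine)
    print(round(association, 3))
# The association grows only on rewarded (correct) trials.
```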
“I wanted to explore how biologically based computational models of behavior, emotion and cognition could be integrated for animation, especially focused on the face,” says Sagar. “The face is a nexus for so many aspects of human experience. Exploring the fundamentals of learning and mental development may be critical to our future interaction with, and use of, more complex and autonomous technology.”
Because the face is a primary means of communication, Sagar hopes his baby will lay the foundation for future health and education applications, such as programs designed to interact with children with autism or other social impairments. A system that perceives a human's emotions, processes them, and understands how that person feels: that is the goal driving AI research, to build a brain that can think on its own, just as we do from our earliest days.