Future Tense

How Treating Robots Like Children Is Changing A.I.

Wait, which hole does the square peg go in again? Photo courtesy of University of California, Berkeley.

For the most part, robotics technologies are still in their infancy. Even the most sophisticated automata are, in effect, toddlers, stumbling and tumbling as they make their way through a world they can’t yet understand. Maybe that’s why some artificial-intelligence researchers have begun to treat their creations like infants, teaching them as one might a baby and letting them learn through those very stumbles and tumbles.

That’s the story Jack Clark tells in a thoroughly reported piece for Bloomberg Businessweek. Clark focuses primarily on the University of California, Berkeley, lab of Pieter Abbeel, where researchers have set up what amounts to a nursery for a robot known as Brett. Resembling “a cross between a toaster and a George Foreman grill,” Brett—the name is short for Berkeley Robot for the Elimination of Tedious Tasks—is learning to interact with its environment by playing with the kind of toys you might find in a preschool. The photographs in Clark’s story show Brett manipulating large Lego-like bricks, toy airplanes, and more. As it does so, it’s literally learning how things fit together. What’s more, it’s doing so under its own steam, apparently receiving only the most general directions from its human handlers as it teaches itself through trial and error.

This patina of childhood exploration and discovery belies the real complexity of the project. Clark explains that Brett functions through a sophisticated combination of deep learning and reinforcement learning. Deep learning, a technique that A.I. researchers have applied widely, helps Brett perceive and make sense of its surroundings. As Slate’s David Auerbach wrote when discussing Google’s application of the technology, deep learning “doesn’t actually have all that much to do with human learning.” In this case, it’s a layered neural network that iteratively adjusts its own internal connections as it works to solve a problem. Reinforcement learning, Clark explains, “trains the robot to improve its approach to tasks through repeated attempts”—it’s a bit closer to the way children learn. He says that Abbeel was “partially inspired by child psychology tapes” in which “young children constantly adjust their approaches when solving tasks.”
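To make the trial-and-error loop concrete, here is a deliberately tiny sketch in Python. It is not Abbeel’s system, which pairs deep neural networks with far more sophisticated policy optimization; it is a bandit-style toy with an invented peg-and-hole task and made-up reward values, meant only to show the core reinforcement-learning idea Clark describes: try something, observe the outcome, and nudge your estimates accordingly.

```python
# A minimal, illustrative sketch of trial-and-error learning.
# The task, rewards, and constants here are all invented for illustration;
# Brett's real training is vastly more sophisticated.
import random

PEGS = ["square", "round", "triangle"]
HOLES = ["square", "round", "triangle"]

# The agent's current estimate of how rewarding each (peg, hole) pairing is.
q = {(peg, hole): 0.0 for peg in PEGS for hole in HOLES}

alpha = 0.5    # learning rate: how strongly each attempt updates an estimate
epsilon = 0.2  # exploration rate: how often to try a random hole anyway

for attempt in range(500):
    peg = random.choice(PEGS)

    # Mostly exploit the best current estimate; occasionally explore.
    if random.random() < epsilon:
        hole = random.choice(HOLES)
    else:
        hole = max(HOLES, key=lambda h: q[(peg, h)])

    # The environment rewards a correct fit and mildly penalizes a miss.
    reward = 1.0 if peg == hole else -0.1

    # The core update: nudge the estimate toward the observed outcome.
    q[(peg, hole)] += alpha * (reward - q[(peg, hole)])

# After enough stumbles, the learned preferences match the geometry.
for peg in PEGS:
    best = max(HOLES, key=lambda h: q[(peg, h)])
    print(f"{peg} peg -> {best} hole")
```

No one ever tells the agent the rule; the matching holes win out simply because the mismatches were tried and found wanting. Deep reinforcement learning swaps the lookup table for a neural network so that the same loop can scale from three symbolic pegs to camera images and motor commands.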

According to Clark, Brett fuses these two systems in a novel fashion. It’s not, however, the first time someone has at least imagined the possibility of educating an A.I. rather than simply programming it. One revealing, if little-discussed, example appears in Richard Powers’ 1995 metafictional novel Galatea 2.2. A retelling of the Pygmalion story, Galatea 2.2 follows a novelist—also named Richard Powers—as he attempts to help a computer make sense of literature by introducing it to a carefully sequenced curriculum of classic works.

Teaching a fictional machine to analyze textual form is, of course, not the same as helping a real one grasp that square pegs don’t fit into round holes. Nevertheless, Powers’ novel may help clarify the limits of projects like Abbeel’s. In one especially evocative passage, Powers writes that the computer, which acquires the name Helen, “was strange. Stranger than I was capable of imagining. She sped laugh-free through Green Eggs and Ham, stayed dry-eyed at Make Way for Ducklings, feared not throughout Where the Wild Things Are. … The symbols these shameless simulations played on had no heft or weight for her, no real-world referent.” Though Helen is like a child, she experiences childhood’s trappings according to an altogether different logic.

Powers’ novel remains a meaningful reminder that artificial intelligence will probably never correspond to human intelligence. Just because a machine can learn to understand the world doesn’t mean it will ever understand that world in the same way that we do. Human children treat their toys as symbols for the real objects to which they correspond. Playing with those toys, they anticipate their own future power, even as they develop that power through their play. For Brett, however, Duplo bricks are presumably just Duplo bricks.

In other words, it’s important to remember that A.I. is, and will likely remain, deeply alien. In his article on Google’s DeepDream, Auerbach chastises those who describe what computers do in human terms. A machine that can produce images that look hallucinatory to our eyes, he observes, isn’t really hallucinating in any meaningful sense. In fact, the mechanics of the neural networks that drive our most promising A.I. creations are bound to make them less like us, even as they seem to become smarter.