Future Tense

The Robots Are Coming

We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate.

President Obama at the National Robotics Engineering Center

Late last week, President Obama visited Carnegie Mellon University’s National Robotics Engineering Center to announce up to $70 million to fund the National Robotics Initiative. In his remarks, Obama quipped, “You might not know this, but one of my responsibilities as commander-in-chief is to keep an eye on robots. And I’m pleased to report that the robots you manufacture here seem peaceful—at least for now.”

We all love a good robot-apocalypse joke. After IBM’s Jeopardy!-playing computer Watson beat the game show’s reigning human champions, Fox News declared, “Our robot overlord isn’t named HAL or SkyNET—it’s Watson.” The same jokes cropped up in 2007, when robots began to take the place of child jockeys on the camel-racing circuit, and again as robots learned to juggle, panhandle, and buy scones. But not many people actually believe the threat is real. For all their advances, robots are still generally able to execute only the tasks they are specifically programmed to carry out. Yet as robot advances come faster (and with Obama’s new National Robotics Initiative, he hopes they will come faster still), there are genuine robot-safety discussions we need to have: not about robots working too well and taking over civilization, but about them not working well enough.

But we’re so enamored with the robot-attack story line that it can skew the way real robot-safety issues are discussed. Take this case from Sweden, for example. According to a translated news story in the Local, “A Swedish company has been fined 25,000 kronor ($3,000) after a malfunctioning robot attacked and almost killed one of its workers at a factory north of Stockholm.” (Emphasis mine.) The worker was attempting to fix the malfunctioning robot and thought he had cut the machine’s power supply, but the robot suddenly came to life and “grabbed a tight hold of the victim’s head.”

This robot’s purpose was to lift heavy rocks, and apparently it mistook the man’s head for such a rock. This wasn’t an act of malice or an “attack”; it was just a machine malfunctioning like any other. Such mishaps are serious safety concerns. But when a safety failure is instead described as an assault, even in a science blog, even facetiously, the conversation changes to something entirely unproductive.

And it’s certainly not the first time a human has been harmed because a robot malfunctioned or because someone wasn’t familiar with the machinery. In a 2004 post about robot accidents, Jeff Fryman of the Robotic Industries Association quoted a worker as saying, “I was trying to un-jam the cams and as I got the jam out, the robot cycled and squeezed me against the machine. I did not know it had to be turned off.” Industrial robots aren’t the only ones we should be talking about from a safety standpoint, either. “What happens if a robot’s motors stop working, or it suffers a system failure just as it is performing heart surgery or handing you a cup of hot coffee?” the Economist asked in 2006. How do we make sure that helper robots like those proliferating in Japan don’t injure someone if they lose power? How do we program movable robots not to roll over someone’s foot? If we’re meant to let these things baby-sit our elderly (though seniors aren’t quite warming up to the idea), these questions are at least worth asking. Academics have discussed ways to create “human friendly robots” (not to be confused with “friendly AI,” which is intended to keep robots from superseding us); this Stanford presentation, for example, lists approaches such as “dependable and safe” design, “impact reduction skin,” “low reflected inertia,” and “distributed sensing.”
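To make the mundane version of the problem concrete, here is a minimal sketch, in Python, of the kind of defensive logic “distributed sensing” implies. The sensor layout, thresholds, and function names are all hypothetical, invented for illustration rather than drawn from any real robot’s software: the idea is simply that a robot should slow down as obstacles get close and stop outright when a sensor fails or contact seems possible.

```python
# Hypothetical safety governor for a mobile helper robot.
# All names and thresholds here are invented for illustration;
# this is not any real robot's API.

CONTACT_CM = 5.0    # treat anything this close as possible contact
CAUTION_CM = 50.0   # begin slowing inside this radius
MAX_SPEED = 1.0     # normalized full speed

def safe_speed(distances_cm):
    """Scale drive speed down as the nearest obstacle gets closer.

    distances_cm holds readings from several sensors (the
    "distributed sensing" idea); a None reading means a sensor
    has dropped out, which should force a stop, not a guess.
    """
    if not distances_cm or any(d is None for d in distances_cm):
        return 0.0  # fail safe on missing or failed sensors
    nearest = min(distances_cm)
    if nearest <= CONTACT_CM:
        return 0.0  # stop: something (or someone) may be touching us
    if nearest >= CAUTION_CM:
        return MAX_SPEED
    # Ramp speed linearly between the contact and caution radii.
    return MAX_SPEED * (nearest - CONTACT_CM) / (CAUTION_CM - CONTACT_CM)

print(safe_speed([120.0, 80.0, 45.0]))  # slowed, not stopped
print(safe_speed([120.0, None, 45.0]))  # 0.0: one dead sensor, full stop
```

The particular numbers don’t matter; the design choice does. A machine strong enough to lift rocks, or a helper robot rolling through a living room, should default to doing nothing when its information is incomplete.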

Even those in the robotics trenches find it tough to resist the fun of the machine-apocalypse talk. Take Carnegie Mellon alum Daniel H. Wilson, who has a Ph.D. in robotics and is the author of the new book Robopocalypse (scheduled to be made into a 2013 film directed by Steven Spielberg). In Robopocalypse, Wilson portrays a near future in which the world’s helper robots, self-driving cars, and even smart toys take part in a coordinated revolt that leaves humans enslaved and marginalized. What made this robotics expert start writing about cybernetic revolt? Frustration with the fixation on evil robots. Robopocalypse follows his 2005 book, How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion. He began writing the survival guide “as a joke” while he was a Ph.D. candidate, he says. “I was surrounded by roboticists, and everyone is out to do good, all trying to solve problems and help people, yet robots are always portrayed as evil. So I said, ‘Fine, I’ll take this seriously … in order to make fun of it all.’ Ultimately, I’m aware of the irony.”

At the end of his remarks, Obama said, “As futuristic and, let’s face it, as cool as some of this stuff is, as much as we are planning for America’s future, this partnership is about new, cutting-edge ideas to create new jobs, spark new breakthroughs, reinvigorate American manufacturing today. Right now. Not somewhere off in the future—right now.” He’s right. There are exciting things happening in robotics. There are also important safety and ethics issues cropping up with these new developments.

When we see a robot doing something ridiculous like singing pop songs, the “I, for one, welcome our bubblegum robot overlords” lines almost write themselves. It sounds silly, but someone needs to think about keeping that robot pop-tart from falling into the crowd if someone trips over the power cord. If we ignore such mundane concerns, the robots are going to defeat us sooner than we think.