Human Nature

Armed Robotry

I’ve been meaning to get back to this Cornelia Dean piece from last week’s NYT Science Times. It’s about one of my favorite topics: military robots. Except it confounds some of my assumptions, which makes it all the more worth thinking about.

First off: The “killing machines” I keep writing about are just drones. They’re fully controlled (except for malfunctions and weather) by human pilots. Dean is talking about something way more unnerving: machines that make their own killing decisions. I had assumed that for safety reasons, this kind of technology was still confined to the computer equivalent of drawing boards. Wrong. Army software contractor Ronald Arkin tells Dean that armed mechanical border guards are already on the job in Israel and South Korea. Here in the United States, the Army is paying Arkin and others to explore, among other things, how to design such robots to “operate within the bounds imposed by the warfighter.” In other words, before we give them guns, we’d better figure out how to keep them from screwing up royally or turning on us.

What’s really interesting about Arkin is that he directly contradicts my paranoid prejudice. It’s not the armed robots I should worry about. It’s the armed humans. Dean summarizes his argument:

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior. Troops who were stressed, angry, anxious or mourning lost colleagues or who had handled dead bodies were more likely to say they had mistreated civilian noncombatants, the survey said.

That makes sense: In war, emotion is more hindrance than help. The same goes for my earlier speculation that drone pilots will become more brutal as they’re insulated from physical risk. Arkin’s data suggest the opposite: Exposure to physical risk makes troops more aggressive, not less. Again, the theory makes sense: You shoot first and ask questions later when failure to shoot jeopardizes your safety. Take the ego out of it, turn the fighter into a robot instead of a person, and the self-protective instinct to shoot first disappears.

That leaves the problem of ethics. Hormones, mirror neurons, socialization, and love, among other things, make most people reluctant to kill one another. Robots lack these inputs. Will they be ruthless? Arkin’s answer, as related by Dean, is that “because rules like the Geneva Conventions are based on humane principles, building them into the machine’s mental architecture endows it with a kind of empathy.”

Well, I wouldn’t go that far. It’s not empathy, exactly. But maybe empathy isn’t so hot as a guide to behavior in combat. Maybe one lesson of the Army’s Iraq survey is that empathy too easily morphs into tribalism. Maybe mechanical soldiers programmed with ethical rules, like the machines of I, Robot, are more likely to behave decently.

But then comes the hitch: What happens when the grainy realities of war defy the simplicity of the robot’s program? What happens when the hard part isn’t restraining yourself from firing on civilians, but distinguishing them from enemy forces in the first place? That’s where Arkin’s dream bogs down. He admits it would be hard for robots to recognize physical changes that entail moral changes, such as an enemy fighter with a wound or a white flag. And that’s basic stuff compared to the multiplying subtleties of modern counterinsurgency. It’s not as though al-Qaida hands out uniforms. Is the guy with the backpack a student or a terrorist? Is the woman across the street chubby or wearing a belt full of explosives?

Here’s my preliminary take on Arkin’s idea: He’s right that we can and should substitute robots for humans in some lethal jobs. Where the categories are clear and cold reason is crucial, let the robots do the guarding and killing. But don’t give the early generations of robots any jobs that require nuanced judgments about who’s a bad guy and who isn’t. And be prepared for the bad guys to learn the loopholes in the robots’ algorithms. If the robots respect white flags, the terrorists will use white flags. If the robots presume women are civilians, the terrorists will use women. That’s what terrorists do: They study our habits and exploit them. It’s a human skill. And it will take humans, not robots, to defeat them.
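To make the loophole point concrete, here’s a toy sketch of the kind of rule-based engagement check I have in mind. To be clear, this is my own illustration, not anything from Arkin’s report or any real system; the rules, categories, and names are all invented.

```python
# Toy sketch of a rule-based engagement check (illustrative only).
# Every bright-line rule below is also, from the other side's point of view,
# an instruction manual for how to game the machine.

HOLD_FIRE = "hold fire"
ENGAGE = "engage"
REFER_TO_HUMAN = "refer to human operator"

def engagement_decision(target):
    # Rule 1: anyone signaling surrender is protected.
    # Loophole: an adversary who knows the rule can fake the signal.
    if target.get("white_flag"):
        return HOLD_FIRE

    # Rule 2: presumed-civilian categories are protected.
    # Loophole: an adversary can recruit from exactly those categories.
    if target.get("category") in {"woman", "child", "medic"}:
        return HOLD_FIRE

    # Rule 3: only targets positively identified as combatants may be engaged.
    # The hard part isn't this line of code; it's producing the label
    # "combatant" reliably from grainy sensor data in the first place.
    if target.get("category") == "combatant":
        return ENGAGE

    # Anything ambiguous gets kicked back to a person.
    return REFER_TO_HUMAN

# The ambiguous cases from this column all land in the last branch.
print(engagement_decision({"category": "person with backpack"}))  # refer to human operator
```

The point of the toy isn’t the code; it’s that any rule simple enough to program is simple enough to exploit, and the cases that actually matter fall through to a human anyway.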