The New RoboCop Gets Robot Ethics Completely Wrong

Future Tense
The Citizen's Guide to the Future
Feb. 14 2014 12:25 PM

Gary Oldman and Joel Kinnaman in RoboCop.

Photo courtesy Kerry Hayes/Columbia Pictures Industries, Inc./Metro-Goldwyn-Mayer Pictures Inc.

Warning: This post contains spoilers about RoboCop.

Wednesday marked the release of RoboCop, a remake of the 1987 science fiction classic, and so far critics have been underwhelmed. Putting artistic value aside, however, the movie does convey the gist of some current debates on the legal and ethical aspects of robots, particularly ones capable of making decisions to kill humans in warfare or law enforcement. The fictional robotics company featured in the movie, OmniCorp, is a case study in how to get robot ethics completely wrong. To illustrate this point, I'll compare the company's behavior to the five ethical principles for robotics developed by researchers in the United Kingdom between 2010 and 2011.

“Robots should not be designed as weapons, except for national security reasons.”
OmniCorp designs and manufactures lethal autonomous robots used to keep the peace around the world—everywhere but the United States, much to the chagrin of the corporate leadership. The company sees a major growth opportunity in expanding into the domestic law enforcement market, and RoboCop is envisioned as a way to put a human face on the company's ambitions and counter Americans' “robophobia.” Sen. Hubert Dreyfus, named after the real-life philosopher and critic of AI research, pushes legislation to prevent robots from ever being allowed to make the “kill decision” because, he argues, they can't understand the value of human life. Mirroring real-life advocates of lethal autonomous robot research, OmniCorp and its supporters (including TV pundit Pat Novak, played by Samuel L. Jackson) emphasize the need to save American soldiers' and cops' lives by replacing them with robots in dangerous situations, and the fact that robots aren't subject to emotions like anger and prejudice. Like the Senate in RoboCop, the researchers who drafted the five principles discussed here were divided about the "except for national security" clause.


“Robots should be designed and operated to comply with existing law, including privacy.”
OmniCorp routinely prioritizes profit over ethics. For example, RoboCop's cyborg brain is designed in a way that prevents him from arresting the senior leadership of the company, allowing them to use him as a tool for their own purposes with no accountability. Likewise, there does not seem to be any concern for privacy: RoboCop can peruse decades of uncensored surveillance camera footage from around the city in order to track down criminals (or purported criminals). The connections between robotics, big data, and privacy are currently being investigated by legal, ethical, and technical experts, so hopefully the OmniCorps of the future will heed the findings of this research (and not deliberately circumvent the associated regulations).

“Robots are products: as with other products, they should be designed to be safe and secure.”
OmniCorp forgoes rigorous testing in order to get the “product” to market as soon as possible and win over the public. Can RoboCop efficiently shoot and kill in virtual reality or a training facility? That's good enough for OmniCorp. Fortunately, in the real world, the U.S. military is aware that going about it that way would be stupid, and research on the trustworthiness and verification of autonomous systems' designs is currently being funded by military research agencies.

“Robots are manufactured artifacts: the illusion of emotions and intent should not be used to exploit vulnerable users.”
RoboCop's purpose (from OmniCorp's perspective) is to put a human face on roboticized law enforcement and convince the public to remove restrictions on police robots. So, needless to say, OmniCorp is being intentionally deceptive. Worse, it keeps its plans secret from the man himself: RoboCop's brain-machine interface is designed to create the illusion of free will when in combat situations, while OmniCorp's AI system literally calls the shots.

“It should be possible to find out who is responsible for any robot.”
OmniCorp, in both movies, blames technical malfunctions for anything that goes wrong in RoboCop's behavior. In real life, related issues are being explored by researchers—how, for example, do we ensure that humans remain responsible for kill decisions in the military as the systems doing the shooting become more autonomous? This raises tricky questions about the connections between causal, ethical, and legal responsibility, to which OmniCorp seems to give little thought.

With Google reportedly setting up an ethics board to address the societal aspects of the AI technologies it’s developing, RoboCop's release and the issues it touches on are timely. It may not win any awards, but it does, like some of the best science fiction, present a vivid demonstration of the sort of future we should try to avoid.

Future Tense is a partnership of Slate, New America, and Arizona State University.

Miles Brundage is a Ph.D. student in Human and Social Dimensions of Science and Technology at Arizona State University.
