Feb. 16 2012 10:06 AM

The Big Robot Questions

The social, legal, and ethical problems posed by the coming robotics revolution.

Humanoid robots can be synched to imitate the motions of a real human being.

Photograph by Yoshikazu Tsuno/AFP/Getty Images.

Sometimes, the creation is better than its creator. Robots today perform surgeries, shoot people, fly planes, drive cars, replace astronauts, baby-sit kids, build cars, fold laundry, have sex, and can even eat (but not human bodies, the manufacturer insists). They might not always do these tasks well, but they are improving rapidly. In exchange for such irresistible benefits, the Robotic Revolution also demands that we adapt to new risks and responsibilities.

This adaptation to new technology is nothing new. The Industrial Revolution brought great benefits and challenges too, from affordable consumer goods to manufacturing pollution. Likewise, we’re reaping the benefits of the Computer Revolution but also still sorting out ethics and policy arising from it, such as online privacy and intellectual property rights.

If you believe Bill Gates, who says that the robotics industry is now at the point the computer industry was 30 years ago, then we'll soon be grappling with difficult questions about how to build robots into our society. Here are three key fronts we'll need to defend.


Safety and Errors

Like any fallible humans, roboticists and computer scientists are hard-pressed to create a perfect piece of very complex software. Somewhere in the millions of lines of code, typically written by teams of programmers, errors and vulnerabilities are likely lurking. A bug in an office application usually causes no significant harm, but even a tiny software flaw in machinery, such as a car or a robot, could prove fatal.

For instance, in August 2010, the U.S. military lost control of a helicopter drone during a test flight. For more than 30 minutes and 23 miles, it veered toward Washington, D.C., violating airspace restrictions meant to protect the White House and other governmental assets. In October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine “friendly” soldiers and wounding 14 others. Experts continue to worry about whether it is humanly possible to create software sophisticated enough for armed military robots to distinguish combatants from noncombatants, and threatening behavior from nonthreatening behavior.

Never mind the many other military-robot accidents and failures: human deaths can occur, and have occurred, in civilian society. The first human death caused by a robot is widely believed to have occurred in 1979, in an auto-factory accident in the United States. It is reasonably foreseeable that a mobile city robot or autonomous car of the future, being a heavy piece of machinery, could also be involved in a tragedy, such as accidentally running over a small child.

Hacking is an associated concern. What makes a robot useful (its strength, its ability to access and operate in difficult environments, its expendability, and so on) could also be turned against us. This issue will become more important as robots become networked and as indispensable to everyday life as computers and smartphones are today. Already more than 50 nations have developed military robotics, including Iran and China; this past week, North Korea reportedly bought several (older) military aerial drones that the United States had previously sold to Syria.

Thus, some of the questions we will face in this area include:

  • Is it even possible for us to create machine intelligence that can make nuanced distinctions, such as between a gun and an ice-cream cone pointed at it, or understand human speech, which often depends heavily on context?
  • What are the trade-offs between nonprogramming solutions for safety—e.g., weak actuators, soft robotic limbs or bodies, using only nonlethal weapons, or using robots in only specific situations, such as a “kill box” in which all humans are presumed to be enemy targets—and the limitations they create?
  • How safe ought robots be prior to their introduction into the marketplace or society?
  • How would we balance the need to keep robots from running amok with the need to protect them from hacking or capture?

Law and Ethics

If a robot does make a mistake, it may be unclear who is responsible for any resulting harm. Product liability laws are largely untested in robotics and, anyway, continue to evolve in a direction that releases manufacturers from responsibility. With military robots, for instance, there is a list of characters throughout the supply chain who may be held accountable: the programmer, the manufacturer, the weapons legal-review team, the military procurement officer, the field commander, the robot’s handler, and even the president of the United States.

As robots become more autonomous, it may be plausible to assign responsibility to the robot itself, if it is able to exhibit enough of the features that typically define personhood. If this seems too far-fetched, consider that there is ongoing work in integrating computers and robotics with biological brains. A conscious human brain (and its body) presumably has human rights, and if we can replace parts of the brain with something else without impairing its critical functions, then those rights could continue in something that is not fully human. We may come to a point where more than half of the brain or body is artificial, making the organism more robotic than human and the question of robot rights harder to dismiss.

One natural way to think about minimizing the risk of robotic harm is to program robots to obey our laws or follow a code of ethics. Of course, this is much easier said than done, since laws can be too vague and context-sensitive for robots to understand, at least in the foreseeable future. Even the three (or four) laws of robotics in Isaac Asimov’s stories, as elegant and adequate as they first appear, fail to close many loopholes that result in harm.

Programming aside, the use of robots must also comply with existing law and ethics. And again, those rules and norms may be unclear or untested with respect to robots. For instance, the use of military robots may raise legal and ethical questions that we have yet to fully consider and that, in retrospect, may seem obviously unethical or unlawful.

Privacy is another legal concern here, most commonly related to spy drones and cyborg insects. But advancing biometrics capabilities and sensors can also empower robots to conduct intimate surveillance at a distance, such as detecting faces as well as hidden drugs and weapons on unaware targets. If linked to databases, these mechanical spies could run background checks on an individual’s driving, medical, banking, shopping, or other records to determine whether the person should be apprehended. Domestic robots, too, can be easily equipped with surveillance devices—as home security robots already are—that could be monitored or accessed by third parties.

Thus, some of the questions in this area include:

  • Are there unique legal or moral hazards in designing machines that can autonomously kill people? Or should robots merely be considered tools, such as guns and computers, and regulated accordingly?
  • Are we ethically allowed to give away our caretaking responsibility for our elderly and children to machines, which seem to be a poor substitute for human companionship (but perhaps better than no—or abusive—companionship)?
  • Will robotic companionship for other purposes, such as drinking buddies, pets, or sex partners, be morally problematic?
  • At what point should we consider a robot to be a “person,” eligible for rights and responsibilities? If that point is reached, will we need to emancipate our robot “slaves”?
  • As they develop enhanced capacities, should cyborgs have a different legal status than ordinary humans? Consider that we adults assert authority over children on the grounds that we’re more capable.
  • At what point does technology-mediated surveillance count as a “search,” which would generally require a judicial warrant?