Future Tense

Can Machines Ever Be “Moral”?

Photo: C3PO and R2D2 onstage during the 33rd AFI Life Achievement Award tribute to George Lucas at the Kodak Theatre, June 9, 2005, in Hollywood, California. (Kevin Winter/Getty Images)

Over the weekend, the New York Times dove into the contentious debate over programming ethical and moral machines. As robotics and artificial intelligence move from science fiction and lab toys to real-world tools, writes Indiana University’s Colin Allen, experts will have to decide whether, and how, to imbue their creations with ethical decision-making skills.

How long until the next bank robber has an autonomous getaway vehicle? This is autonomy in the engineer’s sense, not the philosopher’s. The cars won’t have a sense of free will, not even an illusory one. They may select their own routes through the city but, for the foreseeable future, they won’t choose their own paths in the grand journey from dealership to junkyard. We don’t want our cars leaving us to join the Peace Corps, nor will they any time soon. But as the layers of software pile up between us and our machines, they are becoming increasingly independent of our direct control.

And if machines will be acting without our direct control, Allen argues, it is only responsible to make sure that they are more inclined to behave ethically than unethically. The autonomous getaway vehicle, however, seems something of a red herring as an example. Must any human who wants a ride reveal not only her destination, but her plans at that destination, lest her autonomous car be unwittingly enlisted in a criminal scheme? The gap between the release of a new technology and its adoption for malfeasance has historically been short. Allen makes a strong case for building moral decision-making into AI and robotics. But with too many checks, such systems could hobble new technologies as well.

Read more at the New York Times.