Do Apps Promoting Ethical Behavior Undermine Our Sense of Right and Wrong?

The citizen’s guide to the future.
July 13 2012 6:33 AM

Digital Jiminy Crickets

Do apps that promote ethical behavior diminish our ability to make just decisions?

Michael Schrage, research fellow at MIT Sloan School’s Center for Digital Business, gives us a glimpse into what the next generation of apps might do. While discussing potential developments in “promptware” platforms that cue ideal behavior (for instance, sense that we’re exhausted and recommend we should pause before making an important call), he notes that an app in the works will enable users to determine whether they speak too much in critical situations (like business meetings) and make real-time corrections to improve their performance. He speculates large-scale adoption might do more than change personal behavior. It could transform ethical norms—the very fabric of what members of a society expect from one another.

[Photo: Young businessman with smartphone. Should people rely on smartphone apps to help them make ethical decisions? Courtesy of Thinkstock/nisimo/iStockphoto.]

Currently, individuals are responsible for developing the skills necessary to communicate appropriately and to self-correct when they stray from socially acceptable behavior. Promptware could invert this paradigm. Once it becomes widely available, we could be criticized as irresponsible for not deferring to it: “It may be considered rude—and/or remarkably unprofessional—not to have your devices make sure you're behaving yourself,” Schrage wrote in a Harvard Business Review blog post last November.

Presently, we use our own moral judgment—and carefully selected advisers—to determine whom to consider trustworthy. However, companies are already hard at work automating that judgment based on access to limited data. One such app scans a user’s social media accounts (like Facebook) and builds a psycho-social profile. According to the promotional material, that profile can be used to create trustworthy consumer-to-consumer interactions, like choosing whom to travel or live with. As our data trail expands through increased social networking, more potent programs will follow. Perhaps they’ll enable massive multi-user rankings and produce a widely used profiling technology with the feel of Rate My Professor meets Yelp. If you know your behavior is always subject to judgment, possibly even an instantaneous trust score, your social behavior could change profoundly.


As ethics apps continue to advance, so too will related technological enhancements. Twenty years ago, French theorist Bruno Latour wrote “Where Are the Missing Masses?” and essentially argued that cars embody morality when they are programmed not to start (or to beep incessantly) until the driver’s seatbelt is fastened. In the automobile industry, that example now seems archaic. Ford offers “Speed Limiter,” a feature that prevents drivers from exceeding a set speed, and is considering developing a car that “could help diabetic drivers by employing wireless sensors to monitor their glucose levels.” Nissan is experimenting with prototypes designed to detect when drivers are drunk, including one that “attempts to directly detect alcohol in the driver's sweat.” Toyota is developing mood-reading technology that detects “if the driver is sad, happy, angry or neutral, before assessing how distracted they are likely to be as a result.”

While we may turn to disparate tools for guidance, they won’t soon coalesce into a single digital Jiminy Cricket app. At the moment, artificial intelligence is good at some things, but lousy at others. Mathematical models, statistical analysis methods, and reliable rules for acquiring and processing data are great for determining when to buy a plane ticket, send out a tweet, and drink coffee. But ethical dilemmas are special because they fundamentally concern what Aristotle called phronesis—well-informed, contextual judgment. Every parent has faced the immense challenge of teaching a child when it is OK to lie. White lies (which distort the truth to be polite) do battle with broken promises (which are justifiable in some cases). Fabrications (like rumors, which may or may not be true) collide with deceptive comments (which mislead by withholding facts), bluffs (miscues about what someone will do), and emergency lies (which can involve temporary deception to prevent harm).

To make the right judgment, you need to understand relevance and meaning, not match statistical frequencies. But as philosophers Hubert Dreyfus and Sean Kelly argue, computers currently lack this ability. That's why IBM’s Watson wiped the floor with its Jeopardy! competitors but selected Toronto when faced with the following clue under the category “U.S. Cities”: “Its largest airport is named for a World War II hero; its second largest for a World War II battle.”

Why, then, should we bother speculating about digital Jiminy Crickets? It’s an updated version of the age-old question of whether we lose something fundamental by allowing technology to do more for us. Understanding where diminution begins can help us determine how far behavior-modifying technology should go.

Critics have argued that calculators keep kids from developing math skills and complain that Google has shifted our memory away from facts themselves and toward the sites where information is stored. Most of us are content with these changes. Outsourcing morality, however, is quite another matter. More of our fundamental humanity hangs in the balance.

The authors were supported by the National Science Foundation-funded project “An Experiential Pedagogy for Sustainability Ethics” (#1134943). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.

Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology. He is also a fellow at the Institute for Ethics and Emerging Technology. Follow him on Twitter.

Thomas Seager is an associate professor at the School of Sustainable Engineering and the Built Environment and a Lincoln fellow of ethics and sustainability at Arizona State University.