Future Tense

Digital Jiminy Crickets

Do apps that promote ethical behavior diminish our ability to make just decisions?

Digital ethics cartoon by John Mix.

As if we didn’t already have enough reasons to distrust Wall Street, a new study finds that a troubling number of financial services professionals would rather bury a moral compass than use one. Twenty-four percent of participants agreed that “unethical or illegal behavior could help people in their industry be successful.” Would Main Street be better off if this greed were curtailed by behavior-steering technology: digital Jiminy Crickets?

In the classic story Le avventure di Pinocchio, Pinocchio learns that the essential difference between machines (he is, after all, an animated puppet) and real people is moral conscience. Though only a minor character in Collodi’s novel, the cricket becomes, as Disney’s Jiminy Cricket, an external moral compass who follows our hero through his adventures to tell him right from wrong. Pinocchio develops moral maturity only when he frees himself from the cricket’s advice and grasps how to make ethical decisions on his own.

Smartphones regularly function as extended minds that supersize recall, perform mathematics, and correct spelling. So why not go a step further down the enhancement highway and make your phone your own personalized, digital Jiminy Cricket?

A recent crush of smartphone and tablet apps claims to make hard decisions easier, and the range of ethical dilemmas these apps can weigh in on will only increase. At this rate, Siri 5.0 may be less a personal assistant than an always-available guide to moral behavior. But depending on a digital Jiminy Cricket may be a regressive step away from what makes us all real.

Want to raise your green game beyond the superficial grocery store choice of paper, plastic, or cloth? Use iRecycle to find out where to dispose of electronic goods, paint, metal, and hazardous material. Want to consume conscientiously? Use the GoodGuide mobile app or Shop Ethical! 2012 and you’ll put your values where your wallet is, without getting swindled by misleading corporate greenwashing. Have an on-the-job quandary that you don’t want to share with colleagues? Just look for a niche app. The New York State Bar Association Mobile Ethics App gives “judges, lawyers and law students access to instant ethics advice from their smartphones.”

Ethics apps do more than present users with relevant, sometimes hard-to-obtain information. Like a coach, they also directly influence our choices, motivating us to eat better, exercise more, budget our money, and get more out of our free time. Users don’t see these tools as threats to free will, self-esteem, or sustainable habits. Instead, they’re downloading more and more software containing a “good-behavior layer” that helps them avoid self-sabotaging decisions, like impulse buying and snacking. Capitalizing on three interrelated movements (nudging, the quantified self, and gamification), the good-behavior layer pinpoints our mental and emotional weaknesses and steers us away from temptations that compromise long-term success.

In many cases, good-behavior technology gets the job done by bolstering resolve with digital willpower. By tweaking our responses with alluring and repulsive information, while also shielding us from distracting and demoralizing data, digital willpower helps us better control and redirect destructive urges. Apps like ToneCheck prevent us from sending off hotheaded emails, while GymPact inspires us to go to the gym. Students are getting into the act, too, developing apps that push their classmates to be more responsible: to get to class on time and be less distracted. Arianna Huffington’s project “GPS for the soul” promises to analyze a user’s stress levels and provide overwhelmed people with rebalancing stimuli, like “music, or poetry, or breathing exercises, or photos of a person or place you love.” We’re already willing to delegate self-control to technology, and future developments will likely give devices even more ethical decision-making power.

Michael Schrage, a research fellow at MIT Sloan School’s Center for Digital Business, gives us a glimpse into what the next generation of apps might do. Discussing potential developments in “promptware,” platforms that cue ideal behavior (for instance, sensing that we’re exhausted and recommending a pause before an important call), he notes that an app in the works will let users determine whether they speak too much in critical situations (like business meetings) and make real-time corrections to improve their performance. He speculates that large-scale adoption might do more than change personal behavior. It could transform ethical norms: the very fabric of what members of a society expect from one another.

Should people rely on smartphone apps to help them make ethical decisions?

Photo courtesy of Thinkstock/nisimo/iStockphoto.

Currently, individuals are responsible for developing the skills necessary to communicate appropriately and to self-correct when they stray from socially acceptable behavior. Promptware could invert this paradigm. Once it becomes widely available, we could be criticized as irresponsible for not deferring to it: “It may be considered rude—and/or remarkably unprofessional—not to have your devices make sure you’re behaving yourself,” Schrage wrote in a Harvard Business Review blog post last November.

Presently, we rely on our own moral judgment, along with carefully selected advisers, to determine whom to consider trustworthy. However, companies are already hard at work automating that judgment, even with access to only limited data. After Whit.li scans a user’s social media presence (Facebook, for example), it creates a psycho-social profile. According to the promotional material, that profile can be used to create trustworthy consumer-to-consumer interactions, like choosing whom to travel or live with. As our data trails expand through increased social networking, more potent programs will follow. Perhaps they’ll enable massive multi-user rankings and produce a widely used profiling technology that feels like Rate My Professor meets Yelp. If you know your conduct is always subject to judgment, possibly even to an instantaneous trust score, your social behavior could change profoundly.
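It’s worth noticing how thin the computation behind such a score could be. Here is a minimal sketch, in Python, with invented names and numbers (Whit.li’s actual methods are not public, and nothing below describes them): each user’s trust score is simply an average of peer ratings, weighted by how trusted each rater currently is.

    # A deliberately naive trust-score aggregator. All names and numbers are
    # hypothetical; this illustrates the general idea, not any real product.
    def trust_scores(ratings, rounds=10):
        """ratings maps each rater to a dict of {ratee: value in [0, 1]}."""
        users = set(ratings) | {u for vals in ratings.values() for u in vals}
        scores = {u: 0.5 for u in users}  # everyone starts at a neutral 0.5
        for _ in range(rounds):  # iterate so raters' scores feed back in
            new = {}
            for u in users:
                pairs = [(scores[rater], vals[u])
                         for rater, vals in ratings.items() if u in vals]
                total = sum(weight for weight, _ in pairs)
                new[u] = (sum(w * v for w, v in pairs) / total
                          if total else scores[u])
            scores = new
        return scores

    # Three users rate one another; out comes an "instant" trust score.
    print(trust_scores({
        "alice": {"bob": 0.9, "carol": 0.2},
        "bob":   {"alice": 0.8},
        "carol": {"alice": 0.4, "bob": 0.6},
    }))

The fragility is the point: a handful of ratings and an arbitrary update rule produce a number that looks far more authoritative than it is.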

As ethics apps continue to advance, so too will related technological enhancements. Twenty years ago, French theorist Bruno Latour wrote “Where Are the Missing Masses?” and essentially argued that cars embody morality when they are programmed not to start (or to beep incessantly) until the driver’s seatbelt is fastened. In the automobile industry, that example now seems archaic. Ford offers “Speed Limiter,” a feature that prevents drivers from exceeding a set speed, and is considering developing a car that “could help diabetic drivers by employing wireless sensors to monitor their glucose levels.” Nissan is experimenting with prototypes designed to detect when drivers are drunk, including one that “attempts to directly detect alcohol in the driver’s sweat.” Toyota is developing mood-reading technology that detects “if the driver is sad, happy, angry or neutral, before assessing how distracted they are likely to be as a result.”

While we may turn to disparate tools for guidance, they won’t soon coalesce into a single digital Jiminy Cricket app. At the moment, artificial intelligence is good at some things, but lousy at others. Mathematical models, statistical analysis methods, and reliable rules for acquiring and processing data are great for determining when to buy a plane ticket, send out a tweet, and drink coffee. But ethical dilemmas are special because they fundamentally concern what Aristotle called phronesis—well-informed, contextual judgment. Every parent has faced the immense challenge of teaching a child when it is OK to lie. White lies (which distort the truth to be polite) do battle with broken promises (which are justifiable in some cases). Fabrications (like rumors, which may or may not be true) collide with deceptive comments (which mislead by withholding facts), bluffs (miscues about what someone will do), and emergency lies (which can involve temporary deception to prevent harm).

To make the right judgment, you need to understand relevance and meaning, not match statistical frequencies. But as philosophers Hubert Dreyfus and Sean Kelly argue, computers currently lack this ability. That’s why IBM’s Watson wiped the floor with its Jeopardy! competitors but selected Toronto when faced with the following clue under the category “U.S. Cities”: “Its largest airport is named for a World War II hero; its second largest for a World War II battle.”       

Why, then, should we bother speculating about digital Jiminy Crickets? It’s an updated version of the age-old question of whether we lose something fundamental by allowing technology to do more for us. Understanding where diminution begins can help us determine how far behavior-modifying technology should go.

Critics have argued that calculators keep kids from developing math skills, and that Google has shifted recall away from facts themselves and toward the sites where information is stored. Most of us are content with these changes. Outsourcing morality, however, is quite another matter. More of our fundamental humanity hangs in the balance.

The authors were supported by the National Science Foundation-funded project “An Experiential Pedagogy for Sustainability Ethics” (#1134943). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.