
New Scientist
Dec. 14, 2013, 6:45 AM

The Tragedy of Common-Sense Morality

Evolution didn’t equip us for modern judgments.

We wouldn't fail to save a drowning child in order to spare a $1,000 suit. Yet children on the other side of the world desperately need food, and donating money to feed them doesn't seem to carry the same moral imperative.

Photo by Elena Cruz/Juanolvido/iStock/Thinkstock

Our instincts don't always serve us well. Moral psychologist Joshua Greene explains why, in the modern world, we need to figure out when to put our sense of right and wrong in manual mode. His new book is Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Tiffany O’Callaghan: You say morality is more than it evolved to be. What do you mean?
Joshua Greene: Morality is essentially a suite of psychological mechanisms that enable us to cooperate. But, biologically at least, we evolved to cooperate only in a tribal way. Individuals who were more moral—more cooperative with those around them—could outcompete others who were not. However, we have the capacity to take a step back from this and ask what a more global morality would look like. Why are the lives of people on the other side of the world worth any less than those in my immediate community? Going through that reasoning process can allow our moral thinking to do something it never evolved to do.

TO: So we need to be able to switch from intuitive morality to more considered responses? When should we use which system?
JG: When it’s a matter of me versus us, my interests versus those of others, our instincts do pretty well. They don't do as well when it’s us versus them, my group’s interests and values versus another group’s. Our moral intuitions didn’t evolve to solve that problem in an even-handed way. When groups disagree about the right thing to do, we need to slow down and shift into manual mode.


TO: Do we need a manual mode because our morals are dependent on culture and upbringing?
JG: When you share your moral common sense with people in your locality, that helps you to form a community. But those gut reactions differ between groups, making it harder to get along with other groups.

TO: And these differences result in what you call the “tragedy of common-sense morality”?
JG: Exactly. It is the modern moral problem, us versus them. When there is a conflict, which group’s sense of right and wrong should prevail? If a morality is a system that allows individuals to form a group and to get along with each other, then the challenge is to devise a system that allows different groups to get along—what I call a meta-morality.

TO: You propose utilitarianism, which aims to maximize everyone’s happiness impartially. The idea has been around since the 1700s. What’s different now?
JG: We now have a better biological and psychological understanding of our moral thinking. We can do experiments that reveal its quirks and inconsistencies. The idea that we should do what maximizes happiness sounds very reasonable, but it often conflicts with our gut reactions. Philosophers have spent the last century or so finding examples where our intuition runs counter to this idea and have taken these as signals that something is wrong with this philosophy. But when you look at the psychology behind those examples, they become less compelling. An alternative is that our gut reactions are not always reliable.

TO: Some of your studies use brain imaging. What can this reveal about decision-making and how do we avoid reading too much into the results?
JG: Since functional brain imaging first emerged, we have learned that there aren’t very many brain regions uniquely responsible for specific tasks; most complex tasks engage many if not all of the brain’s major networks. So it is fairly hard to make general psychological inferences just from brain data.

That said, there are some things you can do. In a 2010 study, Amitai Shenhav and I had people make moral judgments involving trade-offs: you can save one person for sure, or you can save a larger number of people with some probability of success. We found that the brain regions responsible for assigning values in these moral judgments are ones that perform the same function more generally, for example, when making decisions about food or money. This indicates that we are using general-purpose valuation mechanisms, and that may matter.

TO: Why does the particular mechanism we use to judge moral values matter?
JG: In the study I just mentioned, we saw that as the number of lives you can save goes up, people care less and less about each one. Why is that? The neural circuitry we inherited from our mammalian ancestors might offer an explanation. If you're a monkey making a decision about which food to forage for, the more food there is available, the more each bit of it diminishes in value. There's only so much you can eat. The thing is, we are using that same kind of process to think about things like saving lives. So an experiment that implicates our basic mammalian valuation mechanisms in judgments about saving people's lives can explain why we show this pattern, and it gives us reason to question our intuitive judgments.
