Stories from New Scientist.
Dec. 14, 2013, 6:45 AM

The Tragedy of Common-Sense Morality

Evolution didn’t equip us for modern judgments.

We wouldn't fail to save a drowning child just to spare a $1,000 suit. Yet children on the other side of the world are desperately in need of food, and donating money to feed them doesn't come with the same moral imperative.

Photo by Elena Cruz/Juanolvido/iStock/Thinkstock

Our instincts don't always serve us well. Moral psychologist Joshua Greene explains why, in the modern world, we need to figure out when to put our sense of right and wrong in manual mode. His new book is Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Tiffany O’Callaghan: You say morality is more than it evolved to be. What do you mean?
Joshua Greene: Morality is essentially a suite of psychological mechanisms that enable us to cooperate. But, biologically at least, we only evolved to cooperate in a tribal way. Individuals who were more moral—more cooperative with those around them—could outcompete others who were not. However, we have the capacity to take a step back from this and ask what a more global morality would look like. Why are the lives of people on the other side of the world worth any less than those in my immediate community? Going through that reasoning process can allow our moral thinking to do something it never evolved to do.

TO: So we need to be able to switch from intuitive morality to more considered responses? When should we use which system?
JG: When it’s a matter of me versus us, my interests versus those of others, our instincts do pretty well. They don't do as well when it’s us versus them, my group’s interests and values versus another group’s. Our moral intuitions didn’t evolve to solve that problem in an even-handed way. When groups disagree about the right thing to do, we need to slow down and shift into manual mode.

TO: Do we need a manual mode because our morals are dependent on culture and upbringing?
JG: When you share your moral common sense with people in your locality, that helps you to form a community. But those gut reactions differ between groups, making it harder to get along with other groups.

TO: And these differences result in what you call the “tragedy of common-sense morality”?
JG: Exactly. It is the modern moral problem, us versus them. When there is a conflict, which group’s sense of right and wrong should prevail? If a morality is a system that allows individuals to form a group and to get along with each other, then the challenge is to devise a system that allows different groups to get along—what I call a meta-morality.

TO: You propose utilitarianism, which aims to maximize everyone’s happiness impartially. The idea has been around since the 1700s. What’s different now?
JG: We now have a better biological and psychological understanding of our moral thinking. We can do experiments that reveal its quirks and inconsistencies. The idea that we should do what maximizes happiness sounds very reasonable, but it often conflicts with our gut reactions. Philosophers have spent the last century or so finding examples where our intuition runs counter to this idea, and have taken these as signals that something is wrong with the philosophy. But when you look at the psychology behind those examples, they become less compelling. The alternative interpretation is that our gut reactions, not the philosophy, are what's unreliable.

TO: Some of your studies use brain imaging. What can this reveal about decision-making and how do we avoid reading too much into the results?
JG: Since functional brain imaging first emerged, we have learned that there aren’t very many brain regions uniquely responsible for specific tasks; most complex tasks engage many if not all of the brain’s major networks. So it is fairly hard to make general psychological inferences just from brain data.

That said, there are some things you can do. In a 2010 study, Amitai Shenhav and I had people make moral judgments involving trade-offs, where you can save one person for sure, or possibly save some number of people with varying probability. We found that the brain regions responsible for assigning values in these moral judgments are ones that perform the same function more generally, for example, when making decisions about food or money. This indicates that we are using general-purpose valuation mechanisms, and that may matter.

TO: Why does the particular mechanism we use to judge moral values matter?
JG: In the study I just mentioned, we saw that as the number of lives you can save goes up, people care less and less about each one. Why is that? The neural circuitry we inherited from our mammalian ancestors might offer an explanation. If you're a monkey making a decision about which food to forage for, the more food there is available, the more each bit of it diminishes in value. There’s only so much you can eat. The thing is, we are using that same kind of process to think about things like saving lives. So an experiment that implicates our basic mammalian valuation mechanisms in making judgments about saving people’s lives can give you an explanation for why we show this pattern, and give us reason to question our intuitive judgment.

TO: In what other ways should we question our intuition?
JG: Consider the dilemma philosopher Peter Singer posed four decades ago. You see a child drowning. You could save that child's life but, if you do, you will ruin your fancy $1,000 suit. Singer asked if it was OK to let the child drown. Most people say, of course not, that would be monstrous.

In another case, children on the other side of the world are desperately in need of food. By donating money, you could save their lives. Do you have an obligation to do that? Most people say that it’s nice if you do, but it’s not terrible if you instead choose to spend your money on luxuries for yourself. Most philosophers have taken those intuitions at face value and said, that’s right, there is a moral obligation when the child is right in front of you, but not on the other side of the world. But Singer asked, is there really a moral difference?

TO: So, is there a moral difference between helping people nearby and those far away?
JG: Psychology can help us answer that question. Jay Musen and I recently did a more controlled version of Singer’s experiment and got very similar results—distance made a difference. What does that mean? When you are thinking about whether you have an obligation to try to save people's lives, you don't usually think, well, how close by are they? Understanding what we are reacting to can change the way we think about the problem.

If, biologically, morality evolved to help us get along with individuals in our community, it makes sense that we have heartstrings that can be tugged—and that they are not going to be tugged very hard from far away. But does that make sense? From a more reflective moral perspective, that may just be a cognitive glitch.

TO: If we value everyone’s happiness equally, won’t we be overwhelmed by the suffering of others?
JG: Utilitarianism is inherently pragmatic—in fact, I prefer to call it “deep pragmatism.” Humans have real limitations, obligations, and frailties, so the best policy is to set reasonable goals, given your limitations. Just try to be a little less tribalistic.

TO: Given our evolutionary heritage, could we ever really adopt this meta-morality?
JG: There is no guarantee, but what is the alternative? To keep going with our gut reactions and pounding the table? To try to come up with some Kantian theory to deduce right and wrong from first principles, like moral mathematicians? The question is not, is this guaranteed to work? The question is, do you have a better idea?

This article originally appeared in New Scientist.

Tiffany O'Callaghan is the CultureLab editor at New Scientist.