The state of the universe.
Sept. 3, 2014 8:22 AM

The Science of Truthiness

Conservative beliefs make a lot more sense when you’re not paying attention.

Stephen Colbert coined the term truthiness. Above, Colbert in 2004.

Photo by Frank Micelotta/Getty Images

A bumper sticker was popular in the city where I went to college. It was yellow, with large black print that read: “Mopeds are dangerous.” Beneath the text was the blocky silhouette of a moped and nothing else. The sticker didn’t illustrate the claim that mopeds were dangerous—it didn’t show a moped crumpled against a tree or running someone over—but it was eye-catching, the yellow contrasting sharply with the black, and on message. I believed that bumper sticker, and still do, for all that I’ve rarely encountered a moped or read about a moped accident or even really grasped the difference between a moped and a Segway.

Katy Waldman

Katy Waldman is Slate’s words correspondent.

Truthiness is the word Stephen Colbert coined to describe the intuitive, not always rational feeling we get that something is just right. Are mopeds dangerous? Sure, if by dangerous you mean significantly riskier than cars but slightly less direful than motorcycles. They are not dangerous compared to smoking a lot of cigarettes or owning a gun. The point is that, while nothing about the bumper sticker backed up its ominous claim, I automatically accepted it.

Truthiness is “truth that comes from the gut, not books,” Colbert said in 2005. The word became a lexical prize jewel for Frank Rich, who alluded to it in multiple columns, including one in which he accused John McCain’s 2008 campaign of trying to “envelop the entire presidential race in a thick fog of truthiness.” Scientists who study the phenomenon now also use the term. It humorously captures how, as cognitive psychologist Eryn Newman put it, “smart, sophisticated people” can go awry on questions of fact.
Newman, who works out of the University of California–Irvine, recently uncovered an unsettling precondition for truthiness: The less effort it takes to process a factual claim, the more accurate it seems. When we fluidly and frictionlessly absorb a piece of information, one that perhaps snaps neatly onto our existing belief structures, we are filled with a sense of comfort, familiarity, and trust. The information strikes us as credible, and we are more likely to affirm it—whether or not we should.

How do our brains decide which assertions seem most lucid and credible? One way to ease cognitive processing is to surround an idea with relevant details. This is similar to what happens with priming: Barrage people with words like leash, collar, tail, and paw and then ask them for a word that rhymes with smog. They’re much likelier to fetch the word dog than, say, bog or agog, because the neural nebula containing Fido is already active. In one experiment, people who read the sentence “The stormy seas tossed the boat” were more prone than those who read the sentence “He saved up his money and bought a boat” to report they’d come across the word boat in a previous exercise, whether they really had or not. The “semantically predictive” statement, salted with seafaring concepts like storm, allowed readers to anticipate the kicker: boat. Then, when the boat appeared, they processed the word so fluently they assumed they must have encountered it previously—the low cognitive effort created an illusion of familiarity.

Photographs can also reduce the amount of cognitive effort needed to understand a claim. This easy processing beguiles us into viewing the claim as friendly, familiar, and correct. In one study, Newman and her colleagues showed volunteers, who were mostly college students, names of politicians and other moderately famous people they were unfamiliar with. Half of the names belonged to living celebrities and half to dead ones. Some names were paired with images of the person they named; some stood alone. One team of participants was asked to assess the truth of the statement “This celebrity is alive,” while the second team did the same for the claim “This celebrity is dead.”

The researchers found that people in both groups more often credited the statement when a picture accompanied it. Newman wasn’t too surprised that photos increased the truthiness of “alive” claims—we trust photography as a medium to document reality, and the images depicted the celebrities as animate humans in the world. What fascinated her, she wrote in her paper, was how “the same photos also inflated the truthiness of ‘dead’ claims: The photos did not produce an ‘alive bias’ but a ‘truth bias.’ ”