Future Tense

Good News for Liars

New technologies for detecting untruths are as problematic as polygraphs.

A computerized polygraph machine being used in a simulated situation on Feb. 26, 2007, in Moscow, Russia.

Photo by Dima Korotayev/Epsilon/Getty Images

In the Watergate tapes, President Richard M. Nixon expressed concern over leaks regarding the Strategic Arms Limitation Talks. He told counselor John Ehrlichman that he was thinking of having hundreds of government employees undergo polygraph tests to pinpoint the source. “I don’t know anything about polygraphs,” said Nixon, “but I know they’ll scare the hell out of people.”

Nixon was right. Ample research demonstrates that when people are hooked up to a fake but realistic-looking apparatus (wonderfully dubbed the “bogus pipeline to truth”), they are likely to tell the truth.

As far as detecting actual lies, however, polygraphs produce too many false positives—that is, they mistake too many truthful people for liars. In the eyes of a lie-detector examiner, innocent people can seem guilty. Under interrogation, they may become frightened, indignant, or agitated. Their hearts pound, their breath labors, and their palms sweat. They may even feel guilty. Conversely, liars are not necessarily anxious; this is especially true of psychopaths and other practiced liars, whose peripheral nervous systems are less responsive to threat than those of most people. At bottom, the polygraph is an arousal detector, not a lie detector.

If the body can’t be trusted to reliably betray its secrets, would going straight to the brain, the organ of deceit, be a better way to reveal deception?

One approach is to discover whether suspects are keeping information to themselves. The so-called guilty knowledge test (GKT) simply requires that suspects have a memory for details of the crime—precisely where it happened, what the victim was wearing, the weapon used, and so forth. If the suspect recognizes these details, a spike of activity registers on an electroencephalograph. Thus, without saying a word, the suspect’s brain will supposedly implicate him.

In some respects, the brain-wave GKT is an improvement over the polygraph, but its major problem is that it relies on memory. Memory, however, is not like a video recorder, nor is it a repository for static recollections: It is often a spectacularly fallible instrument. At each stage of memory—encoding the event, storing it, creating a permanent record, or retrieving it—something can go awry. As a consequence, the GKT is subject to the opposite problem from the one that bedevils the polygraph: false negatives, or liars whom the test deems innocent. People who commit crimes might “pass” a brain-wave interrogation simply because, in the heat of passion or rage, they did not note crucial details of the crime. And if something goes unnoticed, the brain cannot encode a memory. Even when details are encoded, they are not always stored permanently. They can undergo normal decay or become contaminated by both earlier and later memories.

Are there reliably different patterns of brain activation when people deceive than when they tell the truth? The short answer is: only in so-called mock-theft experiments in which researchers “plan” a burglary in the lab and enlist subjects as the thieves. For example, they instruct a subject to “steal” an item (e.g., a ring or a watch) from a desk drawer in the lab and then hide it in a locker. A member of the research team observes the mock theft, so that the ground truth is known to someone. Next, the subject is told to lie to an examiner who asks about the item taken; meanwhile, his brain is being scanned to determine the veracity of his replies.

But it is not easy to generalize from what works in the lab to the real-world elements of lying. Among the reasons: An actual suspect accused of a crime faces an intensely emotional situation with high stakes, one that engages neural correlates of emotion and imagery that would not be found in a less fraught lab lie. Indeed, is it even possible for an instructed lie to be construed as deception in the ordinary sense? In addition, most lab subjects are happy to go along with the testing, whereas real suspects might try to beat the machine by moving their heads, humming, or silently performing multiplication in the hopes of distorting the imaging signal. In one study, investigators found that simply wiggling a single finger or toe could reduce the accuracy of lie detection from near perfect to one-third.

Perhaps the problem with creating machines to spot falsehoods is rooted in the very nature of lies. Scientists who have examined lies per se have found that different types activate different parts of the brain; not all lies are psychologically similar. In their seminal studies, psychologists Stephen Kosslyn and Giorgio Ganis focused on two types of lies: spontaneous lies and rehearsed, or memorized, lies. The latter, as the name implies, are those you are prepared to tell when your friend asks you if you are sticking to your diet. A prepared answer might be “I had a tiny salad” when the truth is that you had a burger and fries. Spontaneous lies are those you tell on the fly, as when your friend asks you whether you can give her annoying boyfriend a ride to the airport, and you say you can’t do it because your car is in the shop.

Kosslyn and Ganis hypothesized that when people tell rehearsed lies, they merely need to retrieve them from memory. A spontaneous lie, by contrast, takes more work. When your friend asks you to chauffeur her boyfriend, you must engage episodic memory (responsible for recalling events), drawing on your past dealings with the boyfriend, and semantic memory (responsible for recalling knowledge) to help manufacture the lie. Presumably, a spontaneous lie would be richer in detail, too, involving visual images or feelings that are encoded in various parts of the brain and thereby giving rise to a more complicated neural representation.

In their experiment, the researchers asked subjects to describe two experiences: their best job and their most memorable vacation. They asked the subjects to choose one of the two experiences, whichever they preferred, to create an alternative version of it, and to memorize that version. So, if the actual vacation was “My parents and I flew from Boston to Barcelona on Continental Airlines and stayed at the Granvia Hotel,” the altered version might be “My sister and I drove from Los Angeles to Mexico City and stayed in a hostel.” Each subject memorized the false version for about a week and then returned to the lab to be scanned. During the scan, researchers told each subject to make up some new (spontaneous) untruths on the fly. So subjects would lie on the spot when asked where they went on vacation and replace Mexico City with, say, Miami, or respond “my aunt” when asked who their travel companion was. A parallel scenario was used for subjects who chose the best-job option.

As the researchers predicted, different brain networks were engaged during spontaneous lying than during rehearsed lying, and both differed from those used during truth telling. Both kinds of lying involved memory processing, but when subjects lied spontaneously, their brains drew more heavily on the anterior cingulate cortex, which presumably facilitated the suppression of what otherwise would have been a truthful response. When their lies were rehearsed, a region in the right anterior prefrontal cortex (involved in retrieving episodic memory) was selectively activated. Truthful memories were the least effortful to produce, presumably because they were acquired naturally and did not require the kind of auditing and editing that spontaneous lies required.

The point is this: No brain region uniquely changes activity when a person lies; each type of lie requires its own set of neural processes. This is because lies are not all alike psychologically. Journalist Margaret Talbot offers a nuanced litany of lies based on motives: “small, polite lies; big, brazen, self-aggrandizing lies; lies to protect or enchant our children; lies that we don’t really acknowledge to ourselves as lies; complicated alibis that we spend days rehearsing.” Some lies are even told for the mere fun of fooling others, a practice psychologists call “duping delight.” And what, one scholar asks, about the “more-or-less honest omissions, exaggerations, shadings, fudgings, slantings, bendings, and hedgings” that are an omnipresent feature of litigation?

Montaigne, the 16th-century French Renaissance essayist, reflected on the kaleidoscopic variety of deception: “The reverse side of the truth has a hundred thousand shapes and no defined limits.” Half a millennium later, researchers are beginning to discern some of those shapes. The lies you tell about yourself, for example, look different on brain scans from the lies you tell about others. A lie about, say, one’s current house will rely on quite different cognitive functions than will a lie about a future home, which engages its own patterns of thought, emotion, and imagination. A lie that generates profound remorse won’t overlap fully, if at all, with the neural correlates of a glib fib. A lie about the future will differ in its neural correlates from one about the past. Montaigne was right: From the whitest of lies to the darkest of deceptions, “the reverse side of the truth has no defined limits.”

This essay is excerpted from Brainwashed: The Seductive Appeal of Mindless Neuroscience by Sally Satel and Scott Lilienfeld, out this week from Basic Books. Future Tense is a partnership of Slate, the New America Foundation, and Arizona State University.