Feb. 26, 2013, 8:45 AM

The Future of Lying

Can society survive if computers can tell fact from fib?

What if computers could tell when we are lying? Photo by Steve Lovegrove/iStockphoto/Thinkstock

This article arises from Future Tense, a partnership of Slate, the New America Foundation, and Arizona State University. On Feb. 28-March 2, Future Tense will be taking part in Emerge, an annual conference on ASU’s Tempe campus about what the future holds for humans. This year’s theme: the future of truth. Visit the Emerge website to learn more and to get your ticket.

Here’s something I bet we all believe: Lying is bad. Telling the truth is good. It’s what our parents told us, right up there with “eat your vegetables,” “brush your teeth,” and “make sure you unplug the soldering iron.” (What? I was raised by engineers.) But here’s something else none of us can deny: We are all liars. According to a 2011 survey of Americans, we humans lie about 1.65 times a day. (Men lie a little more than women, averaging 1.93 lies a day to women’s 1.39.)

Perhaps this is why people got really excited in 2007, when Jeff Hancock, a communications professor at Cornell University, started talking about how we could use computers and algorithms to help detect lies. His research drew on a study he had done with two other professors, Catalina Toma and Nicole Ellison, about how people lie in online dating profiles. It turned out that nine out of 10 people fibbed when describing themselves to prospective mates. This fact may not be so surprising if we are honest with ourselves about our own dating lives. But Hancock went one step further: He began to develop a computer program that could detect the lies that people were telling online.

People have a terrible track record at picking out lies: we can spot one only about 54 percent of the time. Hancock’s algorithms, on the other hand, were able to establish patterns in how people tell lies. One of the telltale signs the programs look for is word count: liars use fewer words, giving less information when they describe events, people, and places. Those who are telling the truth give more details. For instance, they talk about spatial relations, like how far a hotel was from a coffee shop or how long it took to get to the subway.
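Hancock hasn’t published his program here, but a toy sketch can make the idea concrete. The short Python example below is purely illustrative, not his actual classifier: the deception_cues function and the SPATIAL_CUES word list are invented for demonstration, and a real system would weigh far more linguistic features than these two.

```python
# Toy illustration of two of the cues described above: overall word
# count and spatial-detail words. This is NOT Hancock's actual program;
# the cue list and feature names are invented for demonstration.

SPATIAL_CUES = {"near", "far", "behind", "above", "below", "between",
                "blocks", "miles", "minutes", "across", "next"}

def deception_cues(statement: str) -> dict:
    """Count simple signals a text-based deception classifier might weigh."""
    words = statement.lower().split()
    return {
        "word_count": len(words),
        "spatial_details": sum(1 for w in words if w.strip(".,!?") in SPATIAL_CUES),
    }

truthful = ("The hotel was two blocks from the coffee shop, "
            "about ten minutes across the park from the subway.")
evasive = "The hotel was nice. I liked it."

print(deception_cues(truthful))  # longer, with concrete spatial detail
print(deception_cues(evasive))   # shorter and vaguer: the pattern liars show
```

A real classifier would turn counts like these into a probability, trained on examples of known lies and known truths, rather than applying a fixed rule.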

So that’s it: the end of lying as we know it. With the help of computers and software, lying could become a thing of the past. And that scares the hell out of me.

The idea of technology delivering us from the shackles of vicious liars calls to mind Winston Smith in George Orwell’s 1984, especially this passage:

“A kilometer away the Ministry of Truth, his place of work, towered vast and white over the grimy landscape. … The Ministry of Truth—Minitrue in Newspeak—was startlingly different from any other object in sight.* It was an enormous pyramidal structure of glittering white concrete, soaring up, terrace after terrace, three hundred meters into the air. From where Winston stood it was just possible to read, picked out on its white face in elegant lettering, the three slogans of the Party:

WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH.”

In the novel, Winston works for the Ministry of Truth, changing and destroying the past to keep the present in line with the current goals of the oppressive Party. As Orwell puts it, “Who controls the past controls the future: who controls the present controls the past.” The Ministry of Truth is an enormous apparatus for telling lies, for manipulating the past to serve the good of the ruling party. Could an algorithm start to act like the towering Ministry of Truth? If so, whom would it serve? Who defines truth? Using technology to control and police the truth in our communications with other people seems frighteningly dystopian. If all humans lie, then doing away with the ability to fib or fabricate might feel like doing away with a little piece of our very humanity.

Orwell’s vision was important because he was showing us a future that we should avoid. The future isn’t an accident. It’s made every day by the actions of people. Because of this, we need to ask ourselves: What is the future we want to live in? What kinds of futures do we want to avoid?

For the past 56 years, ever since the Soviet Union’s launch of Sputnik birthed the Space Age, we’ve imagined a very specific kind of future, one with sleek angles, shiny-clean homes, and good-looking people using amazing new devices. We’ve seen these images in movies, advertising, and corporate vision videos. As a futurist, I don’t like these visions of tomorrow. I find them intellectually dishonest. They lack imagination and fail to take into account that humans are complicated. In fact, the bright and chrome future is as scary to me as Orwell’s visions. To design a future that we all want to live in, a future for real people, we need to embrace our humanity and imagine it in this future landscape. To be specific, we need to embrace the fact that we are all liars—in certain ways.

“Not all lies are created equal,” my Intel colleague Dr. Tony Salvador, a trained experimental psychologist, told me recently. There are really two kinds of untruths. First, you have the bad lies, the ones we tell to actively deceive people for personal gain. These are the lies that hurt people and can send you to jail. At the other end of the spectrum are the white lies, the little lies we tell to just be nice—“social lubricant,” as Tony puts it. “It’s like when you bump into someone and say, ‘Oh, I’m sorry.’ You’re really not sorry, but you say it so you can both just move on. These kinds of lies just keep our days moving forward. They keep the friction down between people so that we can get done what we need to do in a world full of people.” You know, the kind of fibs that keep us humans from killing one another.

Between deception and comfort lies a vast expanse of bullshit. Bullshit isn’t lying. Princeton professor Harry Frankfurt explains in his book On Bullshit that the bullshitter’s intention is neither to report the truth nor to conceal it. It is to conceal his or her wishes. Bullshit can be the gray area between doing harm to someone (taking advantage) and making them feel better (white lies). It comes down to a question of intent. Are you bullshitting to be nice, or are you bullshitting to deceive and gain an advantage?

This Liars’ Landscape is helpful because it forces us to examine how we could use technology to make people’s lives better without making them less human. One misconception about technology is that it is somehow separate from us as human beings. But technology is simply a tool, a means to an end. A hammer becomes interesting when you use it to build a house. It’s what you can do with the tool that matters.

As we move into a future with more devices and smarter algorithms, how do we design technology that can detect harmful deception while still allowing us to be the lovely lying humans we all love? The first step is to imagine a more human future, with none of that metallic sheen. Perhaps, if we can get the technology right, there will be no deception, a little less bullshit, but just as many white lies.

Correction, Feb. 26, 2013: This article misquoted George Orwell’s 1984. The Ministry of Truth is called Minitrue in Newspeak, not Miniture.

Brian David Johnson is the futurist in residence at Arizona State University’s Center for Science and the Imagination and the director of the ASU Threatcasting Lab.