Flagging Fake News or Bad Sources Won’t Work

The idea—promoted by media organizations and Facebook itself—doesn’t account for how the brain processes information.

As you know from the real news, fake news is a big problem these days. We’ll never know exactly how much effect it had on the election; Mark Zuckerberg initially dismissed the suggestion that it swayed voters as a “pretty crazy idea.” But in the weeks since, both Zuckerberg and several news organizations have come around to the idea of warning people about the legitimacy of news posts’ sources by flagging fake stories.

Zuckerberg wrote about this in a Nov. 18 post, saying, “We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them.” The Facebook Journalism Project, announced on Tuesday, goes further: Facebook will “be working on new ways to help give people information so they can make smart choices about the news they read.” The company also launched “improvements on our platform to further reduce the spread of news hoaxes—including ways for people to report them more easily.” Many news organizations have already created browser extensions that allow people to flag fake news—Slate has one called This Is Fake.

It seems like a sensible idea on its surface. But decades of research on human memory indicate that warnings will be no panacea for the fake news problem. When you read (or hear) a news story—and especially if you don’t give it your full attention—you don’t retain every single detail. You get the gist of the story. You remember the main idea but forget the details, even as you are reading the story. (For example, you may have already forgotten what Zuckerberg originally said about fake news.) You also tend to forget contextual information bearing on the credibility of the story, including its source—whether you read it in Slate, the New York Times, the Huffington Post, or the Denver Guardian, a fake news source that published the now-infamous fake story about the pope endorsing Donald Trump.

As it happens, another contextual detail you may forget is whether the story was flagged as a fake. A 2005 study published in the Journal of Consumer Research offers one example of why this approach will likely fail. In the study, people read statements that were flagged as true or false, such as, “Corn chips contain twice as much fat as potato chips” (false). Then, after delays of 30 minutes and three days, they saw the statements again, and indicated whether each was true or false. After three days, people remembered 26 percent of the false statements as true (and 15 percent of the true statements as false). Flagging fake news with warnings could be useful—it’s just easy for people to forget the warnings.

To make matters worse, the more we are exposed to false information, the more we tend to believe it is true—a phenomenon known as the truth illusion. The University of Toronto psychologist Lynn Hasher and her colleagues first reported this finding in the late 1970s. On three different days, subjects listened to 60 plausible-sounding statements and rated each statement on whether they thought it was true. Half of the statements were true—such as, “Ernest Hemingway received a Pulitzer Prize for The Old Man and the Sea”—and half were false—such as, “Zachary Taylor was the first president to die in office.” Also, some of the statements were repeated across the testing sessions. Hasher and colleagues found that the average truth rating increased from session to session for the repeated statements but stayed the same for the nonrepeated statements, regardless of whether they were actually true. The more often subjects saw a statement, the more convinced they became that it was true. We tend to mistake familiarity for verity. Of course, this is particularly problematic on a platform such as Facebook, where the stories that go viral often appear in your feed again and again and again.

Exposure to fake news could even lead us to “remember” things that never happened. In 2010, as part of a study on false memory, more than 5,000 Slate readers read four news stories about political events, each accompanied by a photograph. Though all of the stories were portrayed as real, one story was a total fabrication and included a convincingly Photoshopped image of the event. For example, one fake story reported on President Barack Obama shaking hands with Iranian President Mahmoud Ahmadinejad at a United Nations conference; another described President George W. Bush entertaining baseball player Roger Clemens at Bush’s ranch during the Hurricane Katrina crisis. The results were startling: More than a quarter of the respondents indicated not only that they remembered the false event happening but also that they remembered seeing it on the news. Other research has shown that people are vulnerable to false memories even when they are explicitly warned that they may be exposed to misinformation.

In short, from everything we know about how memory works—and unfortunately, it’s less intuitive than we’d prefer—labeling fake news as fake will likely not be enough to solve the problem. Of course, many of the fake-news-flagging tools have loftier ambitions—most critically, to stop fake news from showing up in your Facebook feed in the first place. And Facebook’s own statement says it will continue “efforts to curb news hoaxes.” But it also seems intent on avoiding responsibility, stating, “This problem is much bigger than any one platform.”

If we’re going to take a research-based approach to the fake news problem, there’s another way that already has a good track record when applied to other issues: Use public service announcements to encourage people to get their news from reputable sources. Research shows that PSAs, although fodder for late-night comedy, can be highly effective in changing people’s actions. Work by noted social psychologist Robert Cialdini has established that PSAs succeed when they capitalize on the fact that people tend to behave in ways that are both socially approved and popular. For example, in field studies, Cialdini and his colleagues have shown that pro-recycling PSAs work best when they portray recycling as something that is both approved of and common, and anti-littering PSAs work best when littering is portrayed as something that is both disapproved of and uncommon. Apparently, Facebook itself is planning to start using PSAs to inform people about the importance of news literacy. (Per the announcement: “In the short-term, we are working with the News Literacy Project to produce a series of public service ads [PSAs] to help inform people on Facebook about this important issue.”)

Of course, the PSA that we really need is one telling people to stop getting their news from Facebook in the first place. Such a PSA could depict a person bringing up a news story that he or she read on Facebook and then being criticized by a group of friends for getting news from Facebook. Sure, it sounds a little hokey, but PSAs almost always are, and, as Cialdini’s research shows, they can work.

Warning people about fake news may feel like a good idea, but it’s not likely to have much of an effect on people’s actual behavior. A better approach—and one with the potential to actually affect that behavior—might be to use PSAs to dissuade people from getting their news from Facebook in the first place.