Future Tense

So Sayeth Google

The search engine should not be the arbiter of truth.

Gatekeeper as well as gateway?

Search for “vaccines.” At least within our filter bubble, the top item in Google’s “In the news” section earlier this week was an anti-vax column about the “feds’ plan to force vaccines on adults.” The first of the regular search results was the CDC’s pro-vaccination page, followed by the Wikipedia page and then vaccines.gov, which is also in favor of childhood immunizations. The fourth gave equal weight to the pros and the “cons.” The fifth was a Mercola site that is resolutely anti-vax and loaded with “facts” such as “Kids given vaccines have 22 times the rate of ear infections,” not to mention the always popular “and 19 times higher odds of Autism.”

So, the top results at Google include lies, junk science, and exhortations for parents to engage in behavior that is likely to sicken or even kill children infected by preventable diseases. It’s a disservice—to put it mildly—that Google’s top results make it look like there is a serious argument about the connection between vaccines and autism.

Over the past couple of weeks, there’s been a flurry of Net chatter about the possibility that Google might stop ranking results based on an algorithm that favors popularity, and instead give a strong preference to posts that are actually factually correct.

Wait. Does this mean Google intends to be the arbiter of truth? Even if you don’t worry that Google is already too influential, that would raise red flags.

Don’t panic. We checked with Google, and any such change sounds nowhere close to imminent. But we actually think the concept could make some sense … if done carefully and appropriately, and especially if there’s competition in the search arena.

The mini-brouhaha apparently began with a nuanced paper by a Google research team that wondered what it would take to algorithmically estimate “how trustworthy a given web source is.” An article about the report in New Scientist carried the headline “Google wants to rank websites based on facts not links.” And then social media was off to the races, carrying us into a fun round of speculation, anticipation, and worry.
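To give a feel for what “estimating how trustworthy a given web source is” could mean in its crudest form, here is a deliberately simplified sketch: grade a source by how many of its checkable claims agree with a reference knowledge base. To be clear, the toy knowledge base, the names, and the scoring rule below are our own illustration, not the Google researchers’ actual model, which is far more sophisticated and probabilistic.

```python
# Toy illustration only: score a web source by the fraction of its
# factual claims that agree with a reference knowledge base.

# Hypothetical knowledge base mapping (subject, predicate, object) -> True/False.
KNOWLEDGE_BASE = {
    ("mmr vaccine", "causes", "autism"): False,
    ("measles", "is preventable by", "vaccination"): True,
}

def trust_score(claims):
    """Return the fraction of checkable claims that match the knowledge base.

    Each claim is ((subject, predicate, object), asserted_value). Claims the
    knowledge base cannot check are skipped, not counted against the source.
    """
    checkable = [c for c in claims if c[0] in KNOWLEDGE_BASE]
    if not checkable:
        return None  # nothing to grade this source on
    correct = sum(1 for triple, asserted in checkable
                  if KNOWLEDGE_BASE[triple] == asserted)
    return correct / len(checkable)

# Example: a page asserting that the MMR vaccine causes autism scores poorly.
page_claims = [
    (("mmr vaccine", "causes", "autism"), True),
    (("measles", "is preventable by", "vaccination"), True),
]
print(trust_score(page_claims))  # 0.5
```

Even this cartoon version makes the core tension visible: the score is only as good as the knowledge base behind it, which is exactly where the worry about an “arbiter of truth” comes from.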

Taking science as an example, this is what we want from search:

  •  to provide the best answers science currently provides.
  •  not to feature bad science (such as anti-vax sentiment) as if it were equivalent to good science.
  •  not to marginalize work that has not yet been recognized for its scientific merit.
  •  to provide access to controversies within the scientific community.
  •  not to extend controversies that claim to be scientific but in fact are not.
  •  to present us with the truth without being the decider of truth.

The fact that this list of demands is self-contradictory, not to mention difficult, indicates the knottiness of the problem.

We asked Richard Gingras, head of Google News (among other duties), about this. Certainly some things are true or false, Gingras responded. But, he added, to suggest that everything is binary—absolutely true or absolutely false—is to ignore perception and, in many cases, reality. (The black-blue-white-gold dress, anyone?)

Google’s main job in search is to “recommend the best possible sources based on all the signals we have,” Gingras said. But reducing complex issues to a singular truth? That’s not on his to-do list.

Yet not ranking search results based at least in part on their factual accuracy isn’t a neutral act. This is especially important because the Internet often strips ideas of the context of authority, at which point they can all look equally plausible. This has some remarkably good effects—for example, ideas can be examined on their own merits, unprejudiced by the credentials, or lack thereof, of the person who proposes them—but it also enables bad ideas to look equivalent to good ideas, with potentially fatal consequences.

The search engines’ failure to differentiate the accurate from the dangerously inaccurate is one of the most significant causes of this appearance of equivalency. Google’s proposed new approach is therefore a responsible thing to do.

Of course, figuring out what’s worthy of belief isn’t easy. In fact, it’s one of civilization’s basic and continuing tasks. We’re not there yet, despite the efforts of people who prefer to live in an evidence-based society. And we’ll never have perfect knowledge. Still, we should applaud any progress Google and other search engines can make in undoing some of the pernicious effects of the Internet.

But, no, Google absolutely should not become the arbiter of truth. That’s dangerous in every kind of way: Google is not smarter than the experts in science and other disciplines who are engaged in continuous, well-founded, evidence-based arguments. No one—not even Google—wants Google to step in and settle debates that scientists themselves can’t yet fully resolve.

But the search engines don’t have to become the arbiters of truth in order to get more fact-based sites high up in the search results. They can take a page from Wikipedia, which from the beginning has struggled with this very issue.

For example, a few years ago, historian Timothy Messer-Kruse found new material indicating that the standard account of the Haymarket Affair trial was significantly wrong. Wikipedia editors kept removing his well-intentioned modifications of the Haymarket Affair article even though he was likely right, because it isn’t up to Wikipedia to decide controversies within disciplines. Once the established arbiters of the discipline accepted the researcher’s work—perhaps by subjecting it to some degree of scrutiny after it had been published in a peer-reviewed journal—the Wikipedia article would be updated. (See the Wikipedia article’s talk page for a useful list of coverage of this controversy.)

This seems to us to be the right approach. We don’t want Wikipedians doing the work of historians, even if those Wikipedians are historians. We want historians to use their established methods for doing history. Those methods should continue to evolve in the age of the Internet, of course.

In the same way, we don’t want Google making decisions about what is good science or bad science. We want Google to do as careful a job as it can reflecting what the scientific community thinks, in aggregate and based on the best evidence.

The scientific method looks neat and clean in some ways, but the scientific community has fuzzy boundaries about which it will never stop arguing. Its marks of authority are likewise always in debate. So any approximation Google makes will be, well, approximate.

And that’s why we hope Google will pursue at least part of what its researchers have proposed. If it does, the company will no longer be able to say, Hey, we’re just reflecting what you bunch of idiots believe! It will be taking more responsibility.

We’ll all have some responsibility, too—not just to complain (properly) when Google gets it wrong, but to recognize and appreciate the nuance and complexity that are part of almost everything. In other words, we’re all in this together.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.