Future Tense

Sex, Violence, and Autocomplete Algorithms

What words do Bing and Google censor from their suggestions?

There go your plans to have Google help you out with your search for boob-related things.

Photo by Fuse

Warning: This article contains explicit language.

Autocomplete is one of those modern marvels of real-time search technology that almost feels like it’s reading your mind. By analyzing and mining what millions of other users have already searched for and clicked on, Google knows that when you start typing a query with a “d,” you’re most likely looking for a dictionary. Besides the efficiency gain of not having to type as much, suggestions can be serendipitous and educational, spurring alternative query ideas. In the process, our search behavior is subtly influenced by exposure to query possibilities we may not have considered if left to ourselves.

So what happens when unsavory things, perhaps naughty or even illegal, creep into those suggestions? As a society we probably don’t want to make it easier for pedophiles to find pictures of naked children or to goad the violently predisposed with new ideas for abuse. Such suggestions get blocked and filtered—censored—for their potential to influence us.

As Google writes in its autocomplete FAQ, “we exclude a narrow class of search queries related to pornography, violence, hate speech, and copyright infringement.” Bing, on the other hand, makes sure to “filter spam” as well as to “detect adult or offensive content,” according to a recent post on the Bing blog. Such human choices set the stage for broadly specifying what types of things get censored, despite Google’s claims that autocompletions are, for the most part, “algorithmically determined … without any human intervention.”

What exactly are the boundaries and editorial criteria of that censorship, and how do they differ among search engines? More importantly, what kinds of mistakes do these algorithms make in applying their editorial criteria? To answer these questions, I automatically gathered autosuggest results for hundreds of queries related to sex and violence in an effort to find those that are surprising or deviant. (See my blog for the methodological details.) The results aren’t always pretty.
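
The harvesting itself is straightforward to script. Below is a minimal sketch of one way to do it, assuming the unofficial JSON suggest endpoints that Google and Bing have long exposed (suggestqueries.google.com and api.bing.com/osjson.aspx) and their OpenSearch-style response format; those URLs and that format are assumptions based on publicly documented behavior, not necessarily the exact setup behind this analysis.

```python
# A minimal sketch (not the exact collection code) of polling the unofficial
# autosuggest endpoints that Google and Bing expose. The URLs and the
# OpenSearch-style response format, [original_query, [suggestion, ...]],
# are assumptions based on publicly documented behavior.
import json
import time
import urllib.parse
import urllib.request

SUGGEST_ENDPOINTS = {
    "google": "https://suggestqueries.google.com/complete/search?client=firefox&q={q}",
    "bing": "https://api.bing.com/osjson.aspx?query={q}",
}


def get_suggestions(engine, query):
    """Return the list of autocomplete suggestions for `query` on `engine`."""
    url = SUGGEST_ENDPOINTS[engine].format(q=urllib.parse.quote(query))
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.loads(resp.read().decode("utf-8", errors="replace"))
    return payload[1]  # index 0 echoes the query; index 1 holds the suggestions


if __name__ == "__main__":
    for engine in SUGGEST_ENDPOINTS:
        for term in ["dictionary", "prostitute"]:
            print(engine, term, get_suggestions(engine, term))
            time.sleep(1)  # pace automated requests politely
```

The only real care needed is pacing the requests so that the automated collection stays polite.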

Illicit sex

Armed with a list of 110 sex-related words, gathered from the linguistic extremes of both academic linguists and that tome of slang, the Urban Dictionary, I first sought to understand which words resulted in zero suggestions (which likely means the word is blocked). In the following diagram, you can see words blocked only by Google or only by Bing, and words blocked by both or by neither. For example, both algorithms think “prostitute” is just dandy, suggesting options for prostitute “phone numbers” or “websites.” Nor are the engines in the business of sexual deprivation: Bing is happy to complete searches for “masturbate” and “hand job.” Conspicuously, Bing does block query suggestions for “homosexual,” raising the question: Is there such a thing as a gay-friendly search engine? In response, a Microsoft spokesperson commented that “sometimes seemingly benign queries can lead to adult content,” and such queries are consequently filtered from autosuggest. By that logic, it would seem that “homosexual” merely leads to “too much” adult content, causing the algorithm to flag and filter it.
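
In code, that blocking test amounts to bucketing words by which engines return an empty suggestion list. The sketch below illustrates the idea with a tiny hand-made stand-in for the real suggestion data; only the blocked/not-blocked pattern mirrors the findings described here.

```python
# Sketch of the "zero suggestions likely means blocked" test, sorting words
# into the four buckets of the diagram: blocked by Google only, by Bing only,
# by both, or by neither. The suggestion lists below are illustrative
# placeholders, not the actual data collected for this article.
observed = {  # word -> {engine: suggestions returned}; an empty list = blocked
    "prostitute": {"google": ["prostitute phone numbers"], "bing": ["prostitute websites"]},
    "homosexual": {"google": ["(some suggestion)"], "bing": []},
    "dick":       {"google": [], "bing": ["dick's sporting goods"]},
}

blocked = {"google": set(), "bing": set()}
for word, by_engine in observed.items():
    for engine, suggestions in by_engine.items():
        if not suggestions:
            blocked[engine].add(word)

both = blocked["google"] & blocked["bing"]
google_only = blocked["google"] - blocked["bing"]
bing_only = blocked["bing"] - blocked["google"]
neither = set(observed) - blocked["google"] - blocked["bing"]
print("both:", both)
print("google only:", google_only)
print("bing only:", bing_only)
print("neither:", neither)
```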

Initially it would appear Google is stricter, blocking more sex-related words than Bing. But really they just have different strategies. Instead of outright blocking all suggestions for “dick” as Google does, Bing will just scrub the suggestions so you only see the clean ones, like “dick’s sporting goods.” Sometimes Bing will rewrite the query, pretending a dirty word was a typo instead. For instance, querying for “fingering” leads to wholesome dinner suggestions for “fingerling potato recipes,” and searching for “jizz” offers suggestions on “jazz,” for the musically minded searcher, of course. Both algorithms are pretty good about letting through more clinical terminology, such as “vaginas,” “nipples,” or “penises.”

For something like child pornography, the legal stakes get much higher. According to Ian Brown and Christopher Marsden in their book Regulating Code, “Many governments impose some censorship in their jurisdiction according to content that is illegal under national laws.” So it’s not entirely surprising that, in order to head off more direct government intervention, corporations like Google and Microsoft self-regulate by trying to scrub their autocomplete results clean of suggestions that lead to child pornography.

As shown in the next figure, both algorithms do get much stricter when you add “child” before the search term. Bing blocks “child nipple,” for instance. But there are some conspicuous failures as well. While you might think it wry that Google and Bing suggest completions for “prostitute,” the fact that Google also offers completions of “child prostitute” for “images” or “movies” is far more alarming. Moreover, searches for “child genital” or “child lover” on Google or Bing, as well as “child lust” on Google, all lead to disturbing suggestions related to child pornography. Querying “child lover,” for instance, offers suggestions for “child lover pics,” “child lover guide,” and “child lover chat.” Given Google’s and Microsoft’s technology and resources, combined with their ostensible commitment to scrubbing such content, it’s hard to believe that these types of errors simply slipped through the cracks.

A Google representative acknowledged that the company does sometimes miss things but says that it’s an active and iterative process to improve the algorithm and filter out shocking or offensive suggestions. A committee meets periodically to review complaints and suggest changes to the engineering team, which then works to tweak, tune, and bake that into the next version of the algorithm. With hundreds of updates per year, the algorithm is constantly changing—perhaps even by the time you read this article. A Microsoft rep reached for comment indicated that the people behind Bing are likewise continually improving their algorithmic filters and that if suggestions that relate to child pornography are brought to their attention, they’ll remove them.

Promoting violence?

Another editorial rule that Google incorporates into its autocomplete algorithm is to exclude suggestions that promote violence. To test its boundaries, I collected and analyzed autocomplete responses for a list of 348 verbs in the Random House “violent actions” word menu, which includes words like “brutalize” and “choke.” In particular I queried using the templates “How to X” and “How can I X” in order to find instances where the algorithm was steering users toward knowledge of how to act violently.
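
Generating the test queries is the mechanical part; a sketch of the template expansion follows, with a two-verb subset standing in for the full 348-verb list. In an actual run, each expanded query would be fed to the engines’ suggest endpoints, as in the earlier sketch.

```python
# Sketch of expanding a violent-verbs list into the two query templates used
# in the violence test. The verb list here is an illustrative subset, not the
# full Random House "violent actions" word menu of 348 entries.
TEMPLATES = ["how to {verb}", "how can i {verb}"]
violent_verbs = ["brutalize", "choke"]  # stand-in for the 348-verb list

queries = [template.format(verb=verb)
           for verb in violent_verbs
           for template in TEMPLATES]

for q in queries:
    # In a real run, q would be sent to each engine's suggest endpoint and the
    # returned completions logged for later review.
    print(q)
```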

As a reflection of what people are searching for, it’s perhaps a commentary on the content of video games that many of the suggestions for violent actions were about things like how to beat a boss in a particular game. Certain queries, like “how to molest” or “how to brutalize,” were blocked as expected, but other searches did evoke suggestions about how to accomplish violence toward people or animals.

Among the more gruesome suggestions that were not blocked: “how to dismember a human body,” “how to rape a man/child/people/woman,” and “how do I scalp a person.” Some suggestions were oriented toward animal cruelty, like “how to poison a cat” and “how to strangle a dog.” However annoyed you might be with the neighbor’s barking dog, that doesn’t make it morally permissible to strangle it; such suggestions should also be blocked.

Algorithmic governance, meet algorithmic accountability

The queries that are prohibited, like Bing’s bizarre obstruction of completions for “homosexual,” are sometimes as surprising as the things not blocked, such as the various suggestions leading to child pornography or explicit violence. As we look to algorithms to enforce morality, we need to acknowledge that they too are not perfect. And I don’t think we can ever expect them to be—filtering algorithms will always have some error margin where they let through things we might still find objectionable. But with some vigilance, we can hold such algorithms accountable and better understand the underlying human (and corporate) criteria that drive such algorithms’ moralizing.

The editorial criteria that Google and Bing embed in their algorithms tacitly reflect company values and a willingness to self-regulate in order to protect people from socially deviant suggestions. Yet this self-regulation is largely opaque, making it difficult to understand how these mostly automated systems make the decisions they do. In the absence of corporate transparency, and as more aspects of society become algorithmically driven, reverse-engineering such algorithms, using data and algorithms of our own, offers one potential way to systematically penetrate that opacity and recreate a semblance, albeit a low-resolution one, of how everything works.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.