
Why Do We Love To Call New Technologies "Creepy"?


Aug. 22 2012 3:30 AM


When we can’t find another way to explain our objections to facial-recognition software, for instance, creepy becomes a crutch.


In Technology: Art, Fairground, and Theatre, Dutch philosopher Petran Kockelkoren notes that when people first started traveling by rail, new passengers experienced symptoms of “train sickness” that today we would deem bizarre. In the 1860s, for example, complaints of spinal damage caused by sitting on a train reached “epic proportions” in England, where some sufferers demanded compensation. Cases of train sickness appeared in the United States and Germany as well. But decades later the epidemic died down and “disappeared from medical discourse, almost without a trace.” At the height of the train sickness mania, people didn’t just complain of physical maladies (including reports of eye infections, diminished vision, miscarriages, urinary tract blockages, and hemorrhages). Psychiatric claims proliferated, too, including allegations that the train’s rapid movements produced “mental disturbances.”

The train example is but one of many cases in which negative feelings about a new technology conveyed bias rather than insight. The history of conservative reactions to medical progress is especially instructive. After Christiaan Barnard performed the first human-to-human heart transplant at Groote Schuur Hospital in Cape Town, South Africa, on Dec. 3, 1967, critics expressed their uneasiness by condemning the surgery as a problematic case of doctors “playing God.” That anxiety has passed, and today the practice is widely accepted. Meanwhile, the enhancement debates—which may reflect very different sensibilities in 45 years—have shifted to extreme transhumanism, which may give us “animated and programmable LED tattoos connected to your brain” and a nanobot-fueled “Enhancement Olympics.”

But what about cases where feeling creeped out isn’t a reactive or entitled response, but actually signals that something truly is off-base? Then we confront two sticking points. The first is that sometimes we’re not sure which issues should be discussed, and this makes it hard to determine why an argument motivated by creepy feelings should or shouldn’t be considered persuasive. When the Girls Around Me app—which displayed photos, names, and the physical location of users—was released, critics immediately condemned it as creepy, connecting their concerns to privacy matters. It took a sociologist like Nathan Jurgenson to point out that an important issue was being overlooked: sexism.


The second sticking point is that creepy can concern possibilities rather than actualities. Calling something creepy can be a way of saying, “There’s no immediate problem, but I can foresee ways in which things might go wrong in the future.” Given the complexity and uncertainty involved with predicting the future in an ever-changing technological landscape, good arguments can be much harder to come by than using creepy as a rhetorically resonant way of placing a bet on forthcoming disaster.  

In instances where data collection creeps someone out, that person might think, “Given the history of compromised security, the ease with which information can move from one platform to another, and the vested interests in using personal information for control and profit, there’s good reason to keep a close eye on things.” This issue came up when critics depicted Target’s customized advertising as creepy after a store mailer filled with maternity items essentially identified a teenager as pregnant before her parents knew. Although some concern was directed at whether Target uses information in a transparent way, I suspect the greater anxiety concerned a sense of the unknown—an inability to determine what steps retailers “take to protect your identity or to minimize the accidental release of information.”

Likewise, the creepy stigma attached to the facial-detection technology (which can register a viewer’s gender and age) being developed for the next generation of televisions largely arises from concern that it will fuel a new breed of tailored advertising. What we don’t know is exactly how the ads will be configured, how accurate the information scanning will be, and what kinds of safeguards will protect consumers against having their information misused. It is thus easier to be worried than prudent.

Although appeals to creepy can focus us on concerns that deserve attention, we should be sensitive to the dangers of status quo bias, and wary of sensationalized shorthand that short-circuits difficult analysis. Objectors to new technologies need better reasoning than, “I don’t know why—it’s just creepy, all right!?!”

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.