Future Tense

I Know What Will Cheer You Up

Emotion-detecting advertising is coming. Beware.

If our gadgets and appliances can adjust themselves based on how we feel, what about the ads we see?

Photo by VikaValter/iStock/Thinkstock

One of the long-running complaints about the rise of digital intermediaries like Google and Facebook has been that, due to their unabashed enthusiasm for personalization, they lead to an ugly polarization of the public sphere. As citizens are shielded from opposing views, we risk spending our lives in what the writer Eli Pariser has dubbed “the filter bubble.” 

But Pariser’s bubble is only one of many on the horizon. For one, it has very specific technical origins: Until recently, the sensors involved in the personalization process could record our keystrokes and clicks but not our feelings. Even to call them “sensors” might be a bit of a stretch—they were more like cues. Thus, our browsing history could be used to predict what we might want to read next. Or our Google queries could be used to prioritize certain search results in the future.

But, invisible to most of us, a big structural change has happened in the last few years: The sensors responsible for personalization are no longer just textual—they can capture many other dimensions of our activities. They don’t just store the URLs and the search queries—they can also deal with data that is nonlinguistic, from neurophysiological indicators (are we burning too few calories?) to emotional ones (are we feeling anxious or aroused?).

Consider just two products that have recently been featured in the tech media: a car that slows down when it senses that you are not paying attention and a desk that tracks how many calories you burn and adjusts its height accordingly. True, the attention-powered car is just a test run that requires the driver to wear a special helmet, but one can imagine how sensors built into the steering wheel might make the attention detection smoother. (Toyota already toyed with such sensors back in 2011, while Ford has been experimenting with heart-rate monitors built right into the driver’s seat.)

The desk, on the other hand, is an actual product (albeit an expensive and exclusive one). Equipped with thermal sensors and a Linux-powered operating system, the desk knows when you are using it and constantly learns about your habits, eventually suggesting times to stand and sit. Unlike conventional static standing desks, it seeks to engage the user by “changing things up throughout the day, rising an inch or so, very gently.” Combined with data streams captured by all the other sensors in our lives, the desk turns a boring workstation into a health machine. The CEO of the company that manufactures the desk says they are even planning to “import outside data streams to make the desk smarter—like from fitness trackers. If the desk learns that you went for a 3-mile run before work, that will affect your activity profile and what the desk suggests for you that morning.”

If our gadgets and appliances can adjust themselves based on how we feel, what about the ads we see? The sudden plasticity of our physical environment might not pose many public policy concerns, but there is still wide scope for abuse.

Just a few weeks ago, I stumbled upon a recently published paper with a boring title but a fascinating plan: “CAVVA: Computational Affective Video-in-Video Advertising.” Written by three computer scientists in Singapore, the paper proposes an elegant method for inserting ads into videos based on a close analysis of their emotional impact on the viewer. Based on an experiment, the researchers showed that their approach—which relies on a scene-by-scene analysis of the emotional content of the actual video being played—is more effective than relying on the purely “textual” cues about relevance that services like YouTube use today.
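To make the general idea concrete, here is a minimal sketch, in Python, of what choosing an ad slot by emotional fit rather than by keywords might look like. The scene scores, the tiny ad inventory, and the emotional_fit function are all invented for illustration; this is the flavor of the approach, not the CAVVA algorithm itself.

    # Hypothetical illustration: choose where to cut to an ad based on the
    # emotional arc of the video rather than on keywords. The scene scores
    # (valence and arousal on a -1..1 scale) would come from an affect model;
    # here they are simply made up.

    scenes = [
        {"end_sec": 42,  "valence": 0.6,  "arousal": 0.3},   # upbeat opening
        {"end_sec": 95,  "valence": -0.7, "arousal": 0.8},   # tense confrontation
        {"end_sec": 140, "valence": 0.2,  "arousal": 0.1},   # calm resolution
    ]

    ads = [
        {"name": "travel",    "valence": 0.7,  "arousal": 0.4},
        {"name": "insurance", "valence": -0.2, "arousal": 0.2},
    ]

    def emotional_fit(scene, ad):
        """Crude fit score: prefer ads whose tone matches the scene that just ended."""
        return -abs(scene["valence"] - ad["valence"]) - abs(scene["arousal"] - ad["arousal"])

    # Pick the scene boundary and ad with the best emotional match.
    scene, ad = max(((s, a) for s in scenes for a in ads),
                    key=lambda pair: emotional_fit(*pair))
    print(f"Insert the '{ad['name']}' ad at {scene['end_sec']} seconds")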

The limitation of this method is that it selects which ads to show, and when, based on the emotional content of the video rather than the emotional “content” of the user. So the next obvious step is to study what users feel in real time. This can be done by analyzing our facial expressions as we watch the video, measuring our pulse, or tracking our eye movements.
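As a rough sketch of what that next step might look like in code, assuming some facial-expression classifier were available, consider the loop below. Everything in it, from classify_emotion down to the ad picker, is a stand-in for illustration, not any company’s actual system.

    import random
    import time

    def classify_emotion(frame):
        """Stand-in for a real affect model (e.g., a facial-expression classifier
        run on webcam frames); here it just returns a random label."""
        return random.choice(["neutral", "happy", "anxious", "bored"])

    def watch_and_pitch(get_frame, pick_ad, receptive={"happy", "anxious"}):
        """Poll the viewer's emotional state and pitch an ad at a 'receptive' moment."""
        while True:
            emotion = classify_emotion(get_frame())
            if emotion in receptive:
                return pick_ad(emotion)
            time.sleep(1)  # check again in a second

    # Stand-in frame source and ad picker:
    print(watch_and_pitch(get_frame=lambda: None,
                          pick_ad=lambda mood: f"ad tuned to a {mood} viewer"))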

Some startups are already exploring this lucrative terrain. Consider a recent report in the Wall Street Journal that briefly mentions MediaBrix, a company that specializes in “proprietary emotional targeting.” How does it work? Well, they study you when you are playing a computer game—and pitch you a product when you are most emotionally vulnerable. Of course, they don’t put it that way: Rather, the company helps to “reach game players at natural, critical points in game play where they are most receptive to brand messages.” With the proliferation of sensors into the built environment—whether done under the slogan of the “Internet of Things” or the “Smart City”—the scope for such “proprietary emotional targeting” would expand quite considerably.

This may have seemed unrealistic a decade ago but not today—not when Google has its own smart glasses and Apple has introduced the M7, a powerful motion-sensing chip, into the latest iPhone. (As Apple’s marketing chief said on introducing it, “it takes advantage of all these great sensors and it continually measures them,” so that, even in sleep mode, your iPhone can tell if its user is “stationary, running, walking, or driving.”) Google and Apple might be a bit late to the game: Last year Microsoft got a patent for “Targeting Advertisements Based on Emotion” (which mentions its Kinect motion-sensing device). Samsung has plenty of similar patents for technologies that range from facilitating the sharing of emotions over social networks to producing fragrances on mobile phones.

If the future of advertising lies in the processing of nonlinguistic traits, then whoever controls the sensory infrastructure for analyzing and monetizing them—the “emotion sharing apparatus,” as Samsung calls it in one of its patents—will be the successor to today’s moguls of online advertising. For all the claims of inevitable virtualization, hardware—connected to screens, cameras, and data trackers—will only gain in importance, simply because it will allow advertisers to tap into real-time, dynamic, emotional data that is much better suited to advertising than the textual cues that Internet giants have been gathering from our browsing, searching, and “friending.”

To say that our regulators—preoccupied as they are with addressing the privacy problems associated with the collection and storage of textual information—are ill-prepared to tackle the challenges of the nonlinguistic, emotion-based data would be an understatement. Techniques like “proprietary emotional targeting” present dilemmas that go far beyond just privacy concerns; in some sense, they finally substantiate the recurrent fears about “hidden persuaders” that have plagued advertising for decades.

Such fears didn’t seem very serious when everyone saw the same ads at the same time. They did not seem serious when Google and Facebook entered the game, either, as the ads were predictable and we could block them. But the kind of highly customized, emotion-based advertising that would become possible in a world where any “smart” touchable surface can guess how we feel and show us a relevant ad should make us reconsider.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.