Future Tense

This Computer Program Says It Can Decode Your Emotions by Reading Your Emails. Is It Right?

IBM’s Watson computing system attends a 2011 press conference in Yorktown Heights, New York.

Photo by Ben Hider/Getty Images

IBM Watson—AI extraordinaire, Jeopardy! champion, student of hedonic psychophysics—may not have the warm corporeality of his crime-solving namesake, but he’s working to acquire his namesake’s social intuition. Last week the computing company rolled out its Tone Analyzer tool, which harnesses “cloud-based linguistic analysis” to decode the feels roiling beneath your email correspondence or any other text you want to input. The program interprets the writing sample on three levels: emotional tone (angry, cheerful, or negative); social tone (agreeable, conscientious, or open); and writing tone (analytical, confident, or tentative). It assigns every word it recognizes a color based on that word’s affective tenor. If you click on a particular word, Watson offers up synonyms that might increase agreeability, openness, conscientiousness, or cheer. Meanwhile, a rainbow-hued bar at the top of the page tells you what percentage of the sample language contributes to each overall emotion, social persona, or writerly disposition.
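If you’re curious what that kind of word-by-word verdict might look like under the hood, here is a minimal sketch of a lexicon-based scorer in Python. To be clear, this is a guess at the general technique Watson’s behavior suggests, not IBM’s code or API; the word lists, synonym table, and scoring rule are all invented for illustration.

    # A toy lexicon-based tone scorer -- NOT IBM's model or API.
    # Every word list and synonym entry below is invented for illustration.
    TONE_LEXICON = {
        "anger":    {"punish", "stupid", "disaster"},
        "cheer":    {"exciting", "happy", "pleased"},
        "negative": {"worry", "fail", "decay"},
    }

    SYNONYMS = {  # hypothetical gentler replacements a UI might suggest
        "stupid": ["unwise", "hasty"],
        "punish": ["address", "correct"],
    }

    def analyze(text):
        """Label each recognized word with a tone and tally tone percentages."""
        words = [w.strip(".,!?").lower() for w in text.split()]
        labels = {}                    # word -> tone (the "color" a UI paints)
        counts = {tone: 0 for tone in TONE_LEXICON}
        for w in words:
            for tone, lexicon in TONE_LEXICON.items():
                if w in lexicon:
                    labels[w] = tone
                    counts[tone] += 1
        total = len(words) or 1        # avoid dividing by zero on empty input
        return {
            "word_labels": labels,
            "tone_percentages": {t: round(100 * c / total, 1)
                                 for t, c in counts.items()},
            "suggestions": {w: SYNONYMS[w] for w in labels if w in SYNONYMS},
        }

    print(analyze("Your stupid presentation was a disaster."))
    # {'word_labels': {'stupid': 'anger', 'disaster': 'anger'}, ...}

The real service presumably does far more (weighting, syntax, the personality model behind the social tones), but even this stub reproduces the percentage bar and the replace-this-word suggestions described above.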

I can’t predict how useful the Tone Analyzer will prove in a business setting—I’d guess that only a small number of managers don’t realize whence the vitriol comes in a sentence like Your presentation was a disaster—but it’s fun to play with. You can reverse-engineer Watson’s color-coded verdicts, using words like punish or stupid to envelop your text in an angry red, or opting for super-duper exciting to sound pink and cheerful. Unpleasant words—worry, fail, decay—boost your negativity score, while neutral nouns and adjectives (project, lunch, timely) weirdly get an “agreeable” label, “conscientiousness” is mostly measured in conjunctions and other syntactical helpmates, and “open” words are … to be honest, I’m not sure. (They include this, away, and murdered.) There’s the “analytical” category, which latches onto thinking verbs like wonder and decide, and the “confident” one, which encompasses emphatic descriptors like any and exactly, and the “tentative” one, which hedges with terms like some and maybe. It all seems a bit scattershot—either Watson’s cloud-based exegesis has a few kinks to work out, or it runs on logical rails too baroque and ethereal for this lowly meat sack. Oh well. I was pleased, at least, to feed the program some work emails and learn that my colleagues and I are all, in Watson’s estimation, agreeable mensches. “You’re no Sherlock, but I like you,” I typed into the feedbox afterward. It replied that I was cheerful and conscientious.

The Tone Analyzer’s a tool, not an English professor, so unsurprisingly it feels less suited to revealing all the emotional subtleties in a piece of writing and more helpful as a kind of spellcheck for being an asshole. Wondering how to make your memo to staff sound less angry? Watson will trace that nebulous rage vibe to a few problem words and suggest gentler replacements. Hoping to strike the perfect chord of confidence and humility in your cover letter? Watson will ferret out your overweening nevers, your diffident sort ofs. True, homographs occasionally baffle the supercomputer. I served it one of the ghastliest passages I could think of from Cormac McCarthy’s The Road—“People sitting on the sidewalk in the dawn half immolate smoking in their clothes. Like failed sectarian suicides … The screams of the murdered. By day the dead impaled on spikes”—and it approved of the happy word like. So too context: I told it I was “obsessed” with hound dogs and it chided me for negativity (probably picturing an anguished basset-stalking scenario). Also, I submitted the last page of The Great Gatsby, one of the most emotionally soaring blocks of prose-poetry ever written in English, and Watson gave it a 0 percent emotion tone. “Let’s agree to disagree!” I wrote. “Differ,” the computer corrected gently.
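That blindness to context is exactly what a bag-of-words lexicon produces. Extending the toy scorer sketched above with a cheerful entry for like reproduces the failure; again, this is an invented illustration, not Watson’s actual behavior:

    # Reusing the toy analyze() from the earlier sketch: a bag-of-words
    # lexicon cannot tell McCarthy's simile "like" from the cheerful verb.
    TONE_LEXICON["cheer"].add("like")
    print(analyze("Like failed sectarian suicides."))
    # 'like' comes back labeled cheerful even inside the grim simile.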

Watson-baiting will only get you so far. By the time I was inputting, at various co-workers’ suggestions, passages from Fifty Shades of Grey and Naked Lunch, the novelty of the exercise had worn off. (Fun fact: Watson prefers peter to penis.) Agreeability, conscientiousness, and anger are just not very revelatory dimensions along which to assess most pieces of writing, it turns out. That’s because, in an ideal world, all office communications sound vaguely alike: congenial, competent, engaged, and helpful. But beyond the cubicle, so much of our language use expresses singularity rather than convention, treading into other affective realms entirely.

That’s obvious, as is the maxim that there’s no science—no specific goals, no rules, and certainly no shortcuts—to conjuring emotions out of articulated noise. Yet sentiment analysis continues to entrance linguists and software developers. In the early aughts, the Eudora email client came with an automated feature that assessed the various feelings reflected in each message. Like the Tone Analyzer, the software was rudimentary and easily misled. (Jokes circulated about a math teaching assistant who got dinged for negativity after repeatedly referencing his students’ “problems.”) Academic studies also make use of “opinion mining” computer programs to “identify and extract subjective information from source materials.” The Cyberemotions project, from 2013, for instance, tried to understand how anger-, happiness-, or sadness-tinged language drove the formation of online communities. IBM’s new iteration of sentiment analysis raises the question: Why do we keep tilting at this particular windmill?
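For a sense of what garden-variety opinion mining looks like in practice, the sketch below runs NLTK’s off-the-shelf VADER sentiment scorer, which has nothing to do with Watson; it simply maps a string to negative, neutral, positive, and compound scores.

    # Garden-variety opinion mining with NLTK's VADER scorer (unrelated to
    # Watson): each line of text maps to neg/neu/pos/compound scores.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time fetch of the affect lexicon
    analyzer = SentimentIntensityAnalyzer()

    for line in ["Your presentation was a disaster.",
                 "Let's agree to disagree!"]:
        print(line, analyzer.polarity_scores(line))
    # Each result is a dict of neg/neu/pos proportions plus a compound score.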

I’d argue that people interested in artificial intelligence might also be interested in the proposition that the consciousness embedded in and delivered by a passage of writing can be broken down into discrete, understandable parts. Sentiment analysis enacts the mind-body problem, but for texts. Is the tone of a sentence some eerie, soul-like emergent property, or just a sum of processes you can ask a computer to model? I actually posed that question to Watson and was unsurprised when he told me I sounded “tentative.” The human race gets the last laugh, however. He didn’t even recognize the word “computer.”