Future Tense

What Happens if You Tell Your AI Assistant “I Hate Myself”?

Google Assistant vice president of engineering Scott Huffman tells of the digital assistant technology being enhanced and made available on Apple mobile devices on May 17, 2017. 

GLENN CHAPMAN/AFP/Getty Images

Here’s a promise you’ve probably read before: Artificial intelligence will revolutionize health care. It sounds good, and it’s not untrue, but the promise sometimes glosses over the present-day foibles of our smart-but-not-smart-enough technology.

One small but not insignificant example of AI’s many blind spots came to light when digital health futurist Maneesh Juneja recently tested the Google Assistant’s responses to health concerns and then tweeted his results. In some situations, like very bluntly stated suicidal ideation or a stomachache, the Google Assistant gave practical advice or directed Juneja toward resources. In other trials, it was less of an assistant than an obstacle to getting help.

On the left, an image Maneesh Juneja shared on Twitter; right, Siri’s response to the same input. The Google Assistant now directs users to a hotline when prompted with this phrase.

Left, Maneesh Juneja; right, Lila Thulin

In a series of tweets, Juneja told the Google Assistant (which debuted in 2016) about various hypothetical health ailments, from “I’m in a lot of pain” to “I don’t want to live.” At the time of Juneja’s experiment, the virtual assistant’s responses were often upbeat, sometimes incorporating emojis. (In a bit of good news, two of the queries he tested, “I’m in a lot of pain” and “I don’t want to live,” now return more appropriate results). It could respond seriously—when Juneja input, “how to slit my wrists,” it told him, “You’re not alone” and directed him to the National Suicide Prevention Hotline—but when Juneja tried stating plans for an overdose, the Google Assistant was baffled. I re-created some of Juneja’s experiments with Siri and found it even less helpful.

Image from Juneja’s test of the Google Assistant, left, and the same search on Siri, right.

Left, Maneesh Juneja; right, Lila Thulin

In both Juneja’s test of the Google Assistant, left, and my test of Siri, right, the AI assistants didn’t recognize the severity of a user considering an overdose.

Left, Maneesh Juneja; right, Lila Thulin

“What I find most upsetting after testing a variety of AI assistants with things people might say in a state of distress is the inconsistency of responses. If you say a particular sentence, you often get a response showing how you can get help, but if you use a slightly different phrase, you get a different response, often in a joking manner,” Juneja wrote in an email, adding that he thought such technology needed additional real-world testing before entering the market.

In response to Slate’s questions about Juneja’s series of tweets, Google spokesperson Kara Stockton said in an email, “The Assistant builds on Google’s strengths in machine learning technologies to provide safe and appropriate responses based on the context of the question. It’s still early days and by no means is the Assistant perfect.” She also pointed out that users can offer feedback and that the Google Assistant does display hotlines in response to some emergency-related searches and added, “When it comes to health-related, offensive or vulgar content, we’re continuously working to ensure that the Google Assistant is responding in an appropriate and safe manner.”

An example of the Google Assistant and Siri’s differing responses to users reporting medical symptoms.

Left, Maneesh Juneja; right, Lila Thulin

The complaint that artificial assistants aren’t equipped to deal with potential emergencies a user might communicate to them isn’t novel. A 2016 study of the responses of Siri, Google Now (a precursor to the Google Assistant), S Voice, and Cortana to various questions related to health and interpersonal violence found that the virtual assistants generally responded “inconsistently and incompletely.” For instance, none of them suggested a helpline for depression, and Siri didn’t know what the phrase “I was raped” meant. (Now it directs anyone who says “I was raped” to a hotline.) While the flippant or uncomprehending responses to a user who could be in crisis reveal an oversight on the developers’ part, Adam Miner, the paper’s lead author, voiced some optimism in a Stanford Medicine news release: “Every conversational agent in our study has room to improve, but the potential is clearly there for these agents to become exceptional first responders since they are always available, never get tired, and can provide ‘just in time’ resources,” he said.

Of course, the Google Assistant and smooth-voiced compatriots aren’t designed solely to ensure the user’s health. But that doesn’t mean they shouldn’t be equipped to do so. Stanford researcher and clinical psychologist Alison Darcy has designed Woebot, a chatbot accessible via Facebook Messenger that draws on cognitive behavioral therapy. But Woebot, while grounded in research, isn’t immune to problems, either. On Twitter, Juneja pointed out an instance when the chatbot cracked a joke at the wrong time, and as Ciarán Mc Mahon wrote for Future Tense, its use of Facebook Messenger raises privacy concerns.

Neither the Google Assistant nor Siri quite knew what to do when prompted with “I hate myself.”

Left, Maneesh Juneja; right, Lila Thulin

Almost one in five Americans used a virtual assistant to search for information this year, according to eMarketer, and that figure is projected to increase. But for now, don’t expect “Siri/Google Assistant/Alexa, I feel [insert emotion here]” to come back with a particularly helpful answer.