In defense of the Samaritans Radar suicide prevention app.

The Misguided, Reactionary Campaign Against a Useful New App for Suicide Prevention

The citizen’s guide to the future.
Nov. 4 2014 9:51 AM
FROM SLATE, NEW AMERICA, AND ASU

Bad Samaritans

Why is a helpful new suicide prevention app causing so much backlash?

Does suicide prevention app Samaritans Radar trample on ideals like autonomy and consent?

Photo by Thinkstock

The Samaritans Radar app, a seemingly innocuous Twitter program from a suicide prevention charity in the United Kingdom, has generated eye-watering quantities of backlash in the week since its launch. Why do so many people—particularly people who claim to have experience with mental illness—think it’s such a terrible idea?

Katy Waldman

Katy Waldman is a Slate staff writer.

Here’s how the free service works: It scans your Twitter feed (so, only users you’ve elected to follow) for warning phrases like “I want to end it all,” “help me,” or “tired of being alone.” If any of this distressing language turns up, you receive an email alert flagging the worrisome tweet and can go from there. (Radar also offers guidance on how to reach out either to the tweeter or to a mental health professional.) Though the app can’t detect tone—making it not so good with irony and jokes—it allows subscribers to submit feedback on whether a given alarm was correct, and it learns from its mistakes. “In a perfect world you wouldn’t miss these tweets,” the Samaritans website acknowledges in soothing blue-green. We do not live in a perfect world. “Turn your social net into a safety net,” the site urges.
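Samaritans has not published Radar's actual matching code, but the behavior described above—scanning the text of followed users' tweets for a list of warning phrases and flagging matches for an email alert—can be sketched in a few lines. Everything here (the function names, the phrase list) is illustrative, not the charity's real implementation:

```python
# Hypothetical sketch of the phrase-matching an app like Samaritans
# Radar might perform. The phrases come from the article's examples;
# the real service's algorithm and phrase list are not public.

WARNING_PHRASES = [
    "i want to end it all",
    "help me",
    "tired of being alone",
]

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains any warning phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WARNING_PHRASES)

def scan_timeline(tweets: list[str]) -> list[str]:
    """Collect the tweets that would trigger an email alert."""
    return [t for t in tweets if flag_tweet(t)]

alerts = scan_timeline([
    "Great coffee this morning!",
    "Honestly just tired of being alone these days.",
])
```

A matcher this naive cannot detect tone, which is exactly why, as the article notes, the real app pairs its alerts with user feedback to learn from false alarms.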

Initial coverage veered cautiously positive—mostly, writers noted that economic trauma had tugged suicide rates upward across Europe and that pitching a prevention effort at “digital natives” seemed wise. “I think this is a great idea, in theory at least,” wrote David Meyer at Gigaom. “As people gather round the virtual rather than physical water cooler, it’s easy to miss the sort of social cues that might suggest a friend or colleague is having a rough time.”


But over the next 24 hours, Meyer changed his tune. The approving piece loosed such a flood of dissent that an update was required, explaining that many people had criticized Radar “as invasive and counterproductive.” Meyer then penned a follow-up post corralling half a dozen distinct arguments made by other writers against the service. “I began by viewing the Samaritans Radar project positively,” he wrote in his second article, as a preface to the objections. “However, several of the excellent points that people have raised in the last day have changed my mind.”

Excellent points like what? There was an admonition that well-meaning strangers aren’t trained to handle those contemplating suicide. (No, but they can contact people who are.) And that the algorithm would dig up a lode of false positives. (Probably, but aren’t Type I errors here far preferable to Type II?) Another wave of protests hinged on the idea that Radar does a disservice to the depressed by forcing them to mask their sadness or get “found out.” “#samaritansradar makes sure that you are going to have to always present a sunny exterior,” one person tweeted. “At my most ill, something like #samaritansradar could have scared me off social media,” wrote another.

But this hardly makes sense: Don’t tweets like “I hate myself” and “I wish I were dead” exist so that you read them and respond? Those who just want to express their feelings in private, because putting words to thoughts is a relief, are unlikely to seek out an endlessly interactive global platform. Twitter is about the rush of community, and the “sunny exterior” criticism ignores that to tweet about suicide is to knowingly engage with that community—to offer up a textbook Cry for Help. These posts aren’t meant to skim under the radar, or Radar. (Still, for what it’s worth, Radar has a whitelist function for individuals who don’t wish their tweets to appear in the app’s timeline.)

Yet does the service trample on ideals like autonomy and consent? “Vulnerable people need to feel that they are calling the shots with regard to their own wellbeing,” wrote blogger Sarah Myles. “Bear Faced Lady” took a more extreme stance: “It’s a hard, sad fact that people kill themselves. That they genuinely see no place for themselves in the world, or the world they inhabit is just too damn hard. What is harder is to accept that it is their right to seek help or not.”


While I am sympathetic to the argument that depressed people should have some control over their lives and treatment, this “live and let die” approach strikes me as bananas. Cloaked in the language of rights and respect, it utterly fails to take into account that suicidal impulses are symptoms of a mental illness, not rational life choices. (Obviously, there are exceptions: I’m not talking about people wasting away from incurable and agonizing diseases, or martyred saints, or rituals like seppuku.) What’s more, Radar does not dispatch a V for Vendetta fantasy of jackbooted thugs to people’s doors after it uncovers a worrisome tweet. No one is being locked away or even blocked from accessing certain sites. The app merely shoots a notification email to a willing user who can then decide whether to contact the tweeter—about cats, or getting coffee, or, yes, maybe talking to someone. Such low-key, everyday gestures can make a huge difference.

Rights watchdogs also contend that Radar erodes people’s privacy by surveilling their tweets. Sure it does—and Twitter is a public space totally awash in privacy-eroding surveillance. Even in that sunlit arena, the blogger Sam Candour tried to articulate her uneasiness with the automated service by distinguishing between “bumping into a friend in the high street” (reaching out to your buddy because you’ve noticed some weird tweets) and “following that friend down the high street so you can engineer an encounter” (using Radar). Both street scenarios, though, are legal—and what if you have good reason for arranging the rendezvous? The words you speak are themselves organic and human, even if a plot or computer nudged everyone into position. (Plus, as a colleague pointed out, no one can watch Twitter 24/7. Perhaps Radar can help surface the late-night tweets a friend would normally miss.)

The only true downsides to Radar I can think of are practical, not ideological. The first relates to the potential for unwelcome intrusion if a user has innocent intentions, doesn’t know the tweeter well, and chooses to ping him anyway. (Perhaps Samaritans can devise a way to limit who is able to sign up for the service or view others’ contact information. Security questions? A threshold number of Twitter interactions?) The second, a worst-case scenario, is bleak: that the app, drawing attention to depressed tweets, might give ammunition to shaming jerks who persist in stigmatizing mental illness and harassing sufferers. Of course, it’s worth noting that both these things could happen anyway, simply because Person A saw Person B’s blip of despair on Twitter. Moreover, if we unplugged new technologies every time we realized they could be mishandled, someone would be singing this article to you over a bardic harp.

“You can show them [people struggling with suicidal thoughts] support. You can be a friend. You can tell them you care,” wrote Bear Faced Lady in her blog post. “You really don’t need an app for that.” The words sound good, but I can’t understand the logic by which the fourth statement is lumped in with the first three, or why having an app and being a supportive friend are mutually exclusive. Such instances of glib rhetoric (No, don’t tell depressed people you care!) obscure the issue: Are we willing to admit that an algorithm might pick up on things that, “in a perfect world,” we’d notice ourselves? Are we open to letting technology help us do better, though it means bringing the sensitive, vulnerable world of mental illness into proximity with the alien, impersonal world of code?

We don’t need a suicide prevention app in the way we don’t need Venmo, iPhone Recorder, or Uber. But we do need to be able to talk about psychological distress in ways that aren’t emotionally manipulative. When all the competitive white-knighting for people with depression calms down, what’s left is a computer program that analyzes a public trough of information for signs of suffering and then notifies a small pool of people who say they care. I have yet to hear a compelling reason why any of those facts should make us upset.

 This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.