Future Tense

What the Research Really Suggests About That Facebook Chatbot Therapist

Should you turn to Facebook in times of sadness?

DragonImages/iStock

Using social media can be a little like a free, albeit not very effective, therapy session. People share life events, complain about their problems, and offer each other advice, along with lots of FOMO, rants, and vaguebooking.

So Woebot, a chatbot that offers therapy-like services via Facebook Messenger, seems intriguing. The idea is to help you understand and monitor your moods using a combination of natural language processing and therapeutic expertise. Sounds good, right? Using A.I. via social media to significantly reduce psychological problems like anxiety and depression would be quite a breakthrough. But there are some major hurdles to overcome.

Like the rest of the health care sector around the world, mental health treatment is in crisis. Therapy and counseling are incredibly labor-intensive, requiring multiple sessions between one expert and one client over long periods of time to achieve even modest results. As such, this is an area ripe for Silicon Valley-style disruption: using technology to scale a competing service to a bigger audience at a lower cost. In the past year, we've seen ample evidence of this happening. Ever since Facebook opened its Messenger platform to developers, there's been an explosion of chatbots, and several of them are explicitly marketed as mental health tools.

Woebot, built by a Stanford team, is one of the first such bots to be subjected to empirical research and peer review; the results were published in the Journal of Medical Internet Research (Mental Health) on June 6. Using a flyer posted online, researchers recruited a convenience sample of about 70 participants, mostly white women. While it would be easy to criticize such a lopsided sample, it's more important to note that at baseline, more than 75 percent of participants scored in the severe range for anxiety symptoms. These are vulnerable people in need of care and protection.

Participants were randomly assigned either to interact with the bot (test condition) or to consult self-help resources (control condition). Before treatment began, Woebot introduced participants in the test condition to the concepts of cognitive behavioral therapy, a type of psychotherapy that encourages clients to restructure their thinking patterns in order to improve their moods. Woebot then gathered mood data by asking general questions and replied with appropriate empathetic responses. For example, if a participant expressed loneliness, Woebot would reply with something like, "I'm so sorry you're feeling lonely. I guess we all feel a little lonely sometimes." (Here's an example of a Woebot interaction.) Its conversational style also included CBT techniques such as goal-setting and reflecting participants' mood trends back to them. If a participant reported clinical-level problems such as suicidal thoughts or self-harm, he or she was directed to emergency helplines.

After about two weeks, participants once again completed measures of depressive and anxious symptoms and of positive and negative mood. The study's lead author said she was "blown away" by the data, but compared with baseline, no significant between-group differences were observed in anxiety, positive mood, or negative mood. Only on reported depressive symptoms was a significant result achieved. In other words, being assigned to Woebot instead of self-help material made no difference to participants' mood or anxiety levels.

To be frank, these results aren't much to write home about. But at the same time, in these times of extraordinary dysfunction in health care provision, any work that tries to alleviate mental health suffering should be welcomed, if cautiously. While Woebot might not be a cure-all right now, as the authors of the study note, it has the potential to become a useful mental health resource for the 10 million U.S. college students suffering from anxiety and depression.

However, there’s another wrinkle here, one the study authors don’t mention in their write-up. Because Woebot is built on Messenger, participants’ data is shared not only with the Woebot operators, but with Facebook, too.

Facebook came under fire earlier this year when it was accused of helping advertisers target teenagers by their emotional state, an accusation it strenuously denied. In a comment to Slate, Facebook confirmed that it does not offer tools to target ads to people based on their emotional state. Facebook also said that it does not target any advertising based on the content of Messenger conversations. So if you use Woebot, you should not receive targeted ads based on the deeply sensitive data you share with it and, by extension, with Facebook. So far, so good.

However, Facebook could not confirm that it has no plans to do so in the future. Of course, Facebook never comments on future product developments, so this is unsurprising, and it doesn't necessarily mean anything nefarious is in the works. But the policy could change, and Facebook has certainly changed course before. Note, for example, the $122 million fine the European Commission levied against Facebook for combining its data with data from WhatsApp, something the company said it "couldn't" do when it first purchased WhatsApp.

Fundamentally, the social media industry is largely self-regulated, and as a result, so are therapeutic bots like Woebot. Even if they are only minimally effective, the people using them are clearly vulnerable and deserve to have their most sensitive information secured indefinitely. Our mental health crisis is not going away any time soon, and government and the tech industry have profound responsibilities here. As therapy and counseling are disrupted, we need to make this emerging field safe and secure for all of us.