Future Tense

Reading the Mind at Risk

A group of researchers think a combination of fMRI scans and machine learning could help identify patients with suicidal ideation.

A T3 fMRI scan of the brain produced in 2012. Photo illustration by Natalie Matthews-Ramo. Photo by DrOONeil/Wikipedia.

It can sometimes be difficult to recognize suicidal thoughts, even in those we know best. Many of the most familiar warning signs—feeling trapped, reckless behavior, depression—can be indicators of other mental states, from the relatively benign to the clinically severe. Psychologists and clinicians might benefit from more precisely attuned diagnostic tools, ones that would allow them to spot the patients at highest risk.

In a paper published this week in the journal Nature Human Behaviour, researchers suggest that they may have developed just such an approach—a new tech-infused way of identifying individuals with suicidal ideation. Their method combines functional magnetic resonance imaging, or fMRI, brain scans with machine-learning algorithms. While the authors suggest this approach might have far-reaching effects for psychiatric diagnosis and treatment, their work also gives reasons for caution.

Relatively small in scope, the study compares two groups of 17 participants each. The first was composed of individuals who had expressed suicidal ideation in the past, while the second was composed of “healthy controls.” Participants were placed in an fMRI machine and asked to reflect on a series of 30 words. According to the paper, one-third of those words were “positive” (bliss, comfort, good), while another third were “negative” (gloom, guilty, terrible). A final set of 10 were more directly associated with suicide (death, fatal, funeral).

The researchers analyzed the scans of subjects’ brains while they were thinking about these words, separating out results from the six words that most clearly distinguished the two participant groups: death, cruelty, trouble, carefree, good, and praise. They then trained a machine-learning algorithm on this data, effectively telling the program which types of responses were correlated with suicidality and which were not. When the trained algorithm was then shown data from individual participants rather than the group as a whole, it was able to determine, with 91 percent accuracy, whether each participant came from the suicidal ideation group or from the control group.
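In outline, this is a standard supervised-classification setup: one feature vector of brain responses per subject, a classifier trained on labeled examples, and accuracy measured by holding each subject out in turn. The sketch below illustrates that general pattern on synthetic data; the feature construction, classifier choice, and evaluation details here are assumptions made for illustration, not the study’s actual pipeline.

```python
# Illustrative sketch only: trains a simple classifier on synthetic
# "brain response" feature vectors and scores it with leave-one-subject-out
# cross-validation. The real study's features, classifier, and preprocessing
# differ; nothing here reproduces its pipeline.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

n_per_group = 17          # 17 ideators and 17 controls, as in the paper
n_words = 6               # the six most discriminating concepts
n_features_per_word = 5   # hypothetical activation features per concept

# Synthetic stand-in data: one feature vector per subject, formed by
# concatenating that subject's response features for each of the six words.
X_ideation = rng.normal(0.3, 1.0, size=(n_per_group, n_words * n_features_per_word))
X_control = rng.normal(0.0, 1.0, size=(n_per_group, n_words * n_features_per_word))
X = np.vstack([X_ideation, X_control])
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = ideation group

# Leave-one-subject-out: train on 33 subjects, classify the held-out one,
# and repeat so every subject is tested exactly once.
accuracy = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy on synthetic data: {accuracy:.2f}")
```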

In their paper, the researchers are careful to stress that this methodology isn’t meant to function as a standalone diagnostic tool. They suggest instead that “clinical assessment of suicidal risk would be substantially complemented by a biologically based measure.” Marcel Just, a psychology professor at Carnegie Mellon University and the study’s lead author, reaffirmed that point when I called him on Tuesday afternoon. “This isn’t going to replace behavioral psychiatry,” he said. “But I can imagine a patient who doesn’t say anything to their therapist, but they happen to have a scan, and that scan indicates suicidal ideation. Wouldn’t the therapist like to know that?”

The possibility of patients who are at risk because they fail to speak up is central to the paper’s framing. Its authors note, “Nearly 80% of patients who die by suicide deny suicidal ideation in their last contact with a mental healthcare professional.” In support of that claim, however, they point to a 2003 study of relatively limited scope, one that involved studying the charts of “76 patients who committed suicide while in the hospital, or immediately after discharge.” Of those patients, 49 percent had a history of suicide attempts, and 25 percent had been admitted for a suicide attempt. It is difficult to know how those numbers would map onto the larger population. That 2003 study concludes that clinicians should add “severity of anxiety and agitation to our current assessments.” More generally, though, it likely remains important for clinicians to keep asking their patients about suicidal thoughts, even if those in their care are reluctant to discuss the topic.

The researchers also suggest that their findings could have applications beyond diagnosis. They show, for example, that they were able to map the data generated by their six words onto “four previously acquired emotion signatures”—representations of the brain when it is feeling sadness, shame, anger, and pride. “It provides a potential target for therapy,” Just told me. “If you know there’s an excess of sadness associated with death, you can possibly design a therapy that brings that alteration into check and reduces the excess sadness. … Furthermore, you could scan the person again and tell whether your therapy has worked or not.” While this observation is promising, it is hardly surprising to learn that those experiencing suicidal ideation feel sadness, shame, or anger.

Moreover, many have raised concerns about the validity of fMRI findings more generally. As Wired notes in an article on the new Nature paper, a variety of issues haunt these experiments: “Just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about; scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting them up, boom, confirmation.” When we think of a word such as “anger,” for example, it may be difficult to dissociate the process of thinking about the word itself from the feelings it evokes.

Just was undeterred when I brought up these concerns, telling me, “As the progress has moved forward, everything that anybody was skeptical about, that skepticism has been removed. The evidence is firmer and firmer and firmer. It’s very clear that fMRI truly measures brain activity.” That said, some do remain skeptical. Many of those who are dubious about the method bring up a 2010 experiment in which a dead Atlantic salmon seemed to demonstrate neural activity when “shown” pictures of humans and “asked” to reflect on their emotional states. While that prank confronted a specific methodological problem with some studies (the absence of a technique known as “multiple comparisons correction”), it still speaks to ongoing hesitation about embracing fMRI results outright, given software bugs and other problems.

Another issue that has arisen in both Wired’s and the Verge’s coverage is the question of sample size. Are 34 subjects enough to constitute a definitive model for subsequent tests? Just recognizes that they might not be, telling me, “Surely it would be desirable to run this on a substantially larger group, maybe 100 to 200.” And yet, he still thinks that the information is solid: “On the other hand, how many swallows do you have to dissect to tell how many kidneys a swallow has? It really depends on the stability of the phenomena. … If it works so well in a small group, it’s likely to work just as well on a larger group, so long as the larger group is very similar to the smaller group.”

There is, however, arguably reason to be cautious about the proverbial swallow guts on display in Just’s work. As he and his co-authors note in their paper, they actually began with a much larger collection of test subjects. They initially scanned 38 individuals with suicidal ideation and subsequently excluded 21 of them from their analysis “because of the lower technical quality of their data.” As Just explained it to me, they set those subjects aside because the classification software was unable to tell which of the 30 words those subjects were thinking about when the researchers presented their data to it. Notably, however, when the researchers later fed the data from the excluded patients to the algorithm they had trained on the included group, it was able to determine that they belonged to the suicidal ideation group 87 percent of the time. Just chalks this up as a success for fMRI.

Then there’s the question of what these findings really show. As the authors note, “Another limitation is that the current study does not provide a contrast between suicidal ideator and psychiatric control participants who are affected by psychopathology in general.” To put that more simply, the results might not be specific to suicidal ideation: It’s possible, for example, that the brains of those with clinical depression might light up in similar ways. As evidence that their findings were specific to suicide, the authors note that the algorithm was also able to distinguish those who had made a suicide attempt from those who hadn’t with 94 percent accuracy. Here, however, the issue of sample size becomes even more pertinent, since that comparison drew on a group of only 17.
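One way to make the sample-size worry concrete is to put rough uncertainty bounds on accuracies measured on so few people. The sketch below is an illustration only: the counts of 31 of 34 and 16 of 17 correct are assumptions chosen to match the reported percentages, not figures taken from the paper.

```python
# Illustration only: rough 95% Clopper-Pearson (exact binomial) confidence
# intervals for classification accuracies measured on small samples. The
# success counts are assumed to match the reported percentages.
from scipy.stats import beta

def clopper_pearson(successes: int, trials: int, alpha: float = 0.05):
    """Exact binomial confidence interval for a proportion."""
    lower = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lower, upper

for successes, trials, label in [(31, 34, "~91% on 34 subjects"),
                                 (16, 17, "~94% on 17 subjects")]:
    lo, hi = clopper_pearson(successes, trials)
    print(f"{label}: 95% CI roughly {lo:.0%} to {hi:.0%}")
```

Under those assumptions, both intervals span more than 20 percentage points, which is one way of saying that a result measured on 17 or 34 subjects still leaves plenty of room for doubt.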

As even Just acknowledges, this study constitutes an initial step at best. It remains to be seen, for example, whether the results are replicable. Just also hopes that it may be possible to develop a less cumbersome protocol, not least of all because fMRI tests can be enormously expensive. To that end, he says, he’s currently working to “bootstrap” fMRI findings to EEG results, which could potentially provide a cheaper way to identify similar warning signs.

If the study’s findings do hold in future testing—and if researchers find other ways to employ them—this work might open new pathways of diagnosis and treatment. For the time being, though, brain scans and A.I. analysis are unlikely to replace more familiar standards of attention, caution, and care.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.