Your Digital Habits Can Indicate Mental Health Problems. It Could Be a Privacy Nightmare.

The citizen’s guide to the future.
Sept. 25, 2017, 7:11 a.m.
FROM SLATE, NEW AMERICA, AND ASU

A Sane Person’s Privacy Nightmare

Is the internet a safe place for your most revealing thoughts and behaviors?

Companies want to use smartphone data to spot and address problems earlier, which is great—and a possible privacy nightmare.

lzf/iStock

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Thursday, Sept. 28, at 9 a.m., Future Tense will hold an event in Washington on mental health and technology. For more information and to RSVP, visit the New America website.

About 7 in the morning last October, I was walking down Eighth Avenue in New York City with a companion—a psychiatric researcher—when we ran into a man yelling at a Wi-Fi kiosk. He was to our left, this yelling man, disheveled, dirty, and distressed, gray with the grime and ground-downness of the homeless, as he yelled across our path at the curbside object of his ire—a slim, shiny, black-and-silver tower almost 10 feet tall. Called a Link, this tower had replaced a curbside pay phone. It was one of hundreds of Wi-Fi kiosks that the city of New York’s LinkNYC program had been installing to spread free high-bandwidth Wi-Fi around the city.

“You dungeon!” the man yelled at this tower. “You fucking dungeon!”

We didn’t stop to help this man. It didn’t seem a likely therapeutic moment, and although my companion, Tom Insel, whom I was spending time with so I could profile him for the Atlantic, was a psychiatrist, he wasn’t that kind of psychiatrist. Insel was a researcher who had not treated patients for almost 30 years and who for most of the prior 15 had directed the National Institute of Mental Health. I was writing about him because he’d recently taken a job running a brand-new mental health care unit at a Google spinoff called Verily.

Insel’s job at Verily was to figure out how to use technology to create faster, cheaper, better ways to help the mentally ill. Our society has a lot of room to improve in that realm. In the U.S., the lag between first symptoms and treatment typically runs six to 10 years for depression or anxiety and up to three years for psychosis. Mortality due to suicide, depression, anxiety, and other serious mental health problems has been largely unchanged for half a century. While our mental health care system helps people, it does so slowly, it misses a lot of people, and it’s not improving much.

Insel had joined Verily to change that. He wanted, in other words, to help this disturbed man on the street. He wanted in particular to use the data about behavior that the smartphone can harvest to create systems that could sense and respond to mental distress earlier and more gently than we do now.

In this venture he faced lots of competition. In addition to Google’s Verily, at least a dozen tech and health companies and many academic researchers are also exploring how they might use data-driven digital technology, including online therapy chatbots and variations on group chats, to improve mental health care. In fact, six months after my walk with him on Eighth Avenue, Insel, wanting a more nimble platform, left Google to join a much smaller company, a startup called Mindstrong, to develop mental health assessment models based on how people use their smartphone keyboards.

That approach is fairly typical of the companies creating this new sector we might call connected mental health care. They tend to focus not on conventional diagnostic checklists and face-to-face therapy visits but on building looser, more comprehensive assess-and-intervene models based on smartphone data and largely digitized social connections. The first step in these models is to harvest (on an opt-in basis) the wealth of smartphone-generated data that can reflect one’s mental health—how you move, speak, type, or sleep; whether you’re returning calls and texts; and whether you’re getting out and about as much as usual. Such data can quickly show changes in behavior that may signal changes in mood.

The idea is to use that data to spot and address problems earlier, so you can do so more gently. When psychosis isn’t identified until the person has withdrawn into psychosis and become isolated and perhaps jobless and homeless, the first intervention may often be a traumatic hospital admission, and the treatment path ahead long and tough. But if you can spot the early signs in changes like the person’s withdrawal from relationships (as shown in chat, phone, or email use) or the kind of disjointed syntax that can signal the emergence of psychotic thinking, the therapeutic responses can be quicker, less formalized and disruptive, and more patient-directed. “Detect earlier, treat more gently,” as Insel puts it. For instance, peer support and counseling—starting with simply comparing notes and experiences with others facing similar problems—provided through secure, roughly Facebook-like social groups has been shown to help people ward off or recover from mental problems ranging from anxiety to psychosis. Online networks can provide crucial features including anonymity and 24-7 connection to both familiar peers and (depending on the system’s design) professional therapy. Altogether, such distributed, ongoing, real-time data collection and social connection has the potential to create a more patient-centered health care that supplements or even obviates traditional therapist- or drug-based care.

This, then, is what our hyperconnected smartphone world may offer mental health care: faster, cheaper, and better. Which would be huge.

Of course none of this was helping our distressed gentleman on Eighth Avenue. At the time, a bit flummoxed, I saw only a passing connection between the plight of this apparently paranoid man and the emerging world of connected health care that Insel and I were discussing. I merely noted aloud, after we’d walked past in an awkward silence, that “psychiatry”—by which I meant agonizing mental distress—“is everywhere.”

Insel nodded.

“You fucking dungeon!” the man, well behind us now, yelled one last time.

It was only later, when I tried to make sense of what he was yelling, that I realized that his cries voiced one of the most difficult and vital questions about how to use connected technology to aid the distressed. Who will have access to our most intimate conversations? In this man’s dungeon lurked a riddle.

* * *

The connected health care model that Verily and others are pursuing has the potential to put some of our most personal and consequential information at risk. I once thought such data were relatively safe. But the current membranes around our health care data are astoundingly porous. In 2015, for instance, Dan Munro of Forbes, mining industry reports, found that more than 100 million health care records were compromised that year alone. Some were revealed through sloppiness, some through loss or theft of devices. But the biggest portion was exposed by the kinds of simple hacks—phishing scams, for instance—that have been used to steal so many emails and files from the unwitting, including, apparently, the 2016 Clinton campaign. One 2015 phishing attack on the health insurance giant Anthem, for instance, exposed 79 million health care records. Other insurers and health care providers that year gave up anywhere from scores to millions of records apiece.

The year 2016 was no better. According to the security firm Bitglass, the volume of records breached stayed steady, but the number of breaches rose to a new high of 328—a leap of some 22 percent. Unauthorized accidental disclosure (that is, sloppy practice) increased, and hacking and “IT incidents” (the exposures most likely to be proactively exploited) grew to account for 80 percent of all leaked records. According to Healthcare Informatics, the dark web was flooded with so many patient records that the glut drove down the per-record price. Former White House chief information officer Theresa Payton, now the CEO of the cybersecurity firm Fortalice, predicts such hacking will focus increasingly on patient health information because its value will rise as more obvious targets like credit card numbers or email passwords become better protected. In other words, health care is behind the curve.

Its vulnerabilities abound. Recent breaches include an email hack that exposed 3,400 patient records at a children’s hospital, a phishing scam that snagged 1,000 records from a community hospital, a ransomware attack that held hostage 77,000 patient records from a Kansas hospital, and a breach that exposed the patient records of 106,000 Michigan physicians. That was just two weeks’ worth of reports in HIPAA Breach News. The feed makes an alarming read.

For now, however, these leaks are going largely overlooked by the public. The health care industry is thus enjoying a gravity-defying period in which, as Kaveh Safavi of Accenture told Munro, “privacy is dead but trust isn’t.” This can’t last long. The barriers around our health care data resemble not so much firewalls as toilet paper. When they meet the elements, they fall apart, and things get messy.

* * *

“You fucking dungeon!”

Wrapped up in reporting the Insel profile last fall, I forgot about the yelling man on Eighth Avenue until, reviewing my notebooks to write the story one winter day, I came across my scribbles from that October morning. Dungeon. Why did he call a Wi-Fi kiosk a dungeon?

Turning to Google, I learned that the English word dungeon emerged several centuries ago from the French donjon, which in turn is thought to have descended from dominio, Latin for lord or master. Dungeon today generally refers to a dark underground cell beneath a castle—the meaning we all know from movies and New Yorker cartoons. But both donjon and dungeon originally referred to, and still secondarily refer to, a castle keep: a fortified tower, stronger than the other castle parts, that stands within the castle’s walls and serves as its lord’s last and strongest refuge.

In short: a tower of power.

Each gleaming, monolithic LinkNYC kiosk, 9.5 feet tall and as sleek and angular as a Libeskind skyscraper, is very much a tower. The Links were designed to provide not just free high-speed Wi-Fi to anyone within a hundred feet or so, but, through a keyboard, headphone jack, and iPad-sized display installed on the face of each tower, free domestic internet phone calls and hands-on browser access as well. When the first couple hundred units were turned on in Manhattan and Queens in the summer of 2015, they immediately created conflicts. People placed long, loud phone calls. People plugged in speakers and blasted music. People brought milk crates and buckets on which to sit and watch porn. Some got a bit too hands-on about this. At least one such crate-squatter reportedly indulged publicly in behavior that, though essentially universal, is usually indulged in private.

Meanwhile, people concerned with digital security wondered what they might be sharing with LinkNYC and its partners when they clicked “yes” on the terms-and-conditions box to get the free Wi-Fi. In this age of metastatic surveillance and vanishing privacy, a government-installed citywide array of Wi-Fi towers equipped with microphones couldn’t help but prompt what one might call reality-based paranoia. Most people will trust these towers. Others, more security-minded, will not. My 27-year-old son, for instance, is an investigative journalist highly concerned with digital privacy. He told me, “No fucking way I’d use that.” The towers certainly don’t meet the bar of “Don’t use public Wi-Fi anywhere you wouldn’t go barefoot.”

So what did all this have to do with Tom Insel or Google or connected health care? Among the members of the public-private partnership that designed and built the LinkNYC system was a small outfit called Sidewalk Labs that happens, like Verily, to be a Google spinoff. Sidewalk, dedicated to “reimagining cities from the Internet up,” believes the future city will have “ubiquitous connections.” Its partnership in the LinkNYC system is part of its effort to move that future closer.

I don’t mean to suggest anything intentionally sinister here. Sidewalk Labs probably has the finest intentions, as do Tom Insel, Verily, Mindstrong, and doubtless virtually everyone exploring this connected mental health care approach. But in this tangled, intricate, connected web we’re creating—the web in which this man just happens to be yelling past Insel into a door that leads to Google—the tirade seems less a random paranoid rant than a prescient representation of the challenge Insel and others like him are taking on. If this challenge isn’t met—if this emerging sector can’t keep the secrets it harvests, operates with, and lives on—the yelling man on Eighth Avenue will look like a genius.

David Dobbs writes on medicine, science, and culture for Slate, the Atlantic, Pacific Standard, the New York Times, and other publications.