Future Tense

What if Your Cellphone Data Could Reveal Whether You Have Alzheimer’s?

We’re getting closer—but there are some tricky questions that need to be addressed.

Many of us have seen loved ones progress through a neurological disorder. First you notice changes in their speech patterns, then their walk becomes less stable, then they become slower overall. These changes worsen as the disease progresses. With every action that we take, we provide the world with some indirect evidence of our neurological health.

Let’s take talking as an example. It’s a deceptively complicated activity. We must think of the words to convey our message, organize the words in compliance with the rules of our language, and activate more than 100 muscles to produce intelligible speech. This process requires coordination across multiple regions of the brain. If there is a disturbance to any of these regions, it becomes apparent in the resulting speech. What we say and how we say it is actually a window into the health of our brain. After all, just a couple of cocktails affect the brain in ways that we can detect with our naked ears and eyes.

We do research at the intersection of data analytics and neurological disorders. (We are professors at Arizona State University; ASU is a partner with Slate and New America in Future Tense.) And we wondered: What if we could measure aspects of everyday activities like talking, walking, and typing to objectively quantify and track our brains’ health and function over time?

Enter the smartphone. The mobile and web applications we use consistently monitor our daily patterns. Every time we email, make a call, write a Facebook message, or record a Snap on Snapchat, we are providing indirect evidence of our neurological health. By one estimate, people spend an average of four hours a day on their smartphones. Our phones are fitted with a variety of sensors that can provide health-relevant information, such as what our speech sounds like, how we communicate, where we go, how fast we move, and how we change our facial expressions.

That means there is a significant opportunity to leverage these new sources of data for improved decision-making and diagnosis in medicine. Imagine a health care professional being able to monitor your aunt’s speech in phone calls to optimize her medication for Parkinson’s disease. Imagine algorithms applied to your grandpa’s emails that his doctor uses to track his Alzheimer’s disease progression. Imagine a phone monitor that alerts a family that their son at college may have stopped taking his medication for bipolar disorder, based on unusual changes to his daily patterns.

In our work, we use principles from signal processing, mathematics, and speech/language analytics to solve what engineers call an inverse problem: Given long-term data from an individual, what can we say about his or her neurological health? As proof of concept, we have worked with retrospective, opportunistic data sources from public figures to test our algorithms. Our first case study was President Ronald Reagan, who was diagnosed with Alzheimer’s disease in 1994. By analyzing the complexity of the language he used in press conferences, we identified changes in language consistent with dementia six years before his official diagnosis. Similarly, computational analysis of YouTube interviews with the late boxer Muhammad Ali revealed a slowing and slurring of his speech consistent with Parkinsonism well before his official diagnosis in 1984.

More recently, we performed a similar analysis on active players in the National Football League to identify potential early signs of chronic traumatic encephalopathy, or CTE. This study showed clinically interpretable and concerning changes in language use among active NFL players: their sentences became simpler over the years, and their vocabularies shrank. Similar changes have been observed in populations with mild cognitive impairment and dementia. We have now partnered with industry and clinics to prospectively evaluate these speech and language analytics on a national scale.
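To make the kinds of measures we describe more concrete, here is a minimal Python sketch. It is illustrative only, not our actual analysis pipeline: it computes two simple proxies for language complexity, average sentence length and vocabulary diversity (type-token ratio), for transcripts from different years. The sample transcripts and function names are hypothetical.

import re
from typing import Dict

def lexical_measures(transcript: str) -> Dict[str, float]:
    """Compute two simple complexity proxies for a block of text."""
    # Split into sentences on ., !, ? (a crude heuristic; real work would use an NLP toolkit).
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    # Lowercased word tokens.
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    if not words or not sentences:
        return {"mean_sentence_length": 0.0, "type_token_ratio": 0.0}
    return {
        # Shorter sentences over time can signal syntactic simplification.
        "mean_sentence_length": len(words) / len(sentences),
        # Fewer unique words per token can signal a shrinking active vocabulary.
        "type_token_ratio": len(set(words)) / len(words),
    }

if __name__ == "__main__":
    # Hypothetical transcripts keyed by year; in practice these would be
    # press-conference or interview transcripts of a single speaker.
    transcripts = {
        1982: "The plan we are proposing today rests on three distinct principles.",
        1988: "Well, it is a good plan. It is a plan we like. We like the plan.",
    }
    for year, text in sorted(transcripts.items()):
        print(year, lexical_measures(text))

Real analyses rely on far more robust linguistic and acoustic features, and on clinical validation, but even toy measures like these illustrate how language can be tracked over time.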

This is all very promising. However, doctors can’t yet diagnose someone from afar with only smartphone data. Before we can use these new sources of data to inform clinical decision-making, we need to overcome some significant technical challenges. While we have recently seen important advancements in machine learning and artificial intelligence, this work has been largely driven by consumer applications, with Google, IBM, Facebook, Microsoft, and Amazon leading the way. These companies have developed deep learning models that achieve near-human performance on certain tasks, even as their workings are largely incomprehensible to human users. Most people use the Amazon Echo without an understanding of the A.I. algorithms that underlie the speech recognition engine. And for the most part, they don’t need to know. If your Echo doesn’t understand you, you simply repeat the sentence until it does or you give up.

But health applications are fundamentally different. The repercussions of a misdiagnosis can be lethal. That’s why doctors are currently wary of relying solely on A.I. for diagnosis. To build trust, clinicians need access to interpretable evidence to confirm any diagnosis or assessment from an A.I. engine. These algorithms should provide not only a final diagnosis but also interpretable evidence for why a decision was made. For example, an A.I. engine that predicts a diagnosis of Parkinson’s disease may back up this claim with additional information about the person’s gait, speech patterns, tremor, and so on. Such metrics are interpretable to clinicians because they appear on existing test batteries used to diagnose Parkinson’s disease. While we have seen advances toward this goal recently, we still have a way to go.

In addition to the technical obstacles, there are also policy challenges. How do we strike a balance between maximal health benefits and privacy? Most of us accept that Facebook, Google, and Snapchat will sell our data for targeted advertising. In exchange, they provide us with services we value. Are we equally OK with the same companies selling our data to health insurance providers for risk assessment? What happens if users take actions based on this data without medical consultation? What are Facebook’s ethical responsibilities if it determines that someone is at risk for a mood disorder based on changes in a user’s posting history? What are a doctor’s responsibilities if he or she makes an incorrect decision based on information from an A.I. engine?

These are complicated questions that may require additional regulation. Technical innovation in A.I. is outpacing the policy innovation needed to deal with the very significant changes ahead. And that future is arriving sooner than we think.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.