Future Tense

You Are Your Data

And you should demand the right to use it.


This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. On Thursday, Nov. 14, Future Tense will host an event on how technology affects obesity at the New America office in Washington, D.C. For more information and to RSVP, visit the New America website.

We are becoming data. Every day, our smartphones, browsers, cars, and even refrigerators generate information about our habits. When we click “I agree” on terms of service, we opt in to systems in which we are known only by our data. So we need to be able to understand ourselves as data, too.

To understand what that might mean for the average person in the future, we should look to the Quantified Self community, which is at the frontier of understanding what our role as individuals in a data-driven society might look like. Quantified Self began as a Meetup community sharing personal stories of self-tracking techniques, and is now a catchall adjective to describe the emerging set of apps and sensors available to consumers to facilitate self-tracking, such as the Fitbit or Nike Fuelband. Some of the self-tracking practices of this group come across as extreme (experimenting with the correlation between butter consumption and brain function). But what is a niche interest today could be widely marketed tomorrow—and accordingly, their frustrations may soon be yours.

Over the last year, I talked with many members of the Quantified Self community to understand how they use their personal data in their everyday lives. Throughout my research, I kept hearing variations on a theme: People complained that they can’t access their data, and that it is siloed in proprietary platforms and interfaces. The QS community values the ability to correlate across datasets, to visualize in novel ways, to ask questions of the data that a proprietary interface might not allow. But firms’ proprietary controls over data often limit individuals’ ability to derive personal insights about their health, behaviors, and so on. This may sound like something that only interests a small, tech-savvy group of people, but as consumer self-tracking takes off and we connect our homes to the Internet of Things, we will all need to start developing new data literacies. So the QS community’s gripes today give us an indication of where policies will need to evolve.
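The kind of cross-dataset question QSers want to ask can be sketched in a few lines. Assuming you could actually export, say, daily step counts from an activity tracker and nightly sleep hours from a separate app (the numbers below are invented for illustration), correlating them is trivial:

```python
from statistics import mean

# Hypothetical week of self-tracked data: daily step counts from an
# activity tracker and hours slept as logged by a separate sleep app.
steps = [4200, 11000, 8500, 3100, 9800, 12400, 7600]
sleep_hours = [6.1, 7.8, 7.2, 5.5, 7.5, 8.2, 6.9]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(steps, sleep_hours)
print(f"steps vs. sleep correlation: r = {r:.2f}")
```

The math is the easy part. The frustration the QS community describes is everything before line one: getting both series out of their respective proprietary silos in formats that line up at all.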

Unfortunately, much of the way we talk about our interests in personal data relies on anachronistic analogies to the physical world. We instinctively say, “I should own my data.” But “ownership” of data implies the power to exclude others from it, and that doesn’t align with how easily data is copied and transferred. We use apps and sensors to create the data, then send it off to cloud servers run by firms with legal claims over the large-scale datasets we help them co-create. Privacy rights, too, miss the point. Privacy is a negative right: it obliges others to leave you alone. But the biggest pain point for QSers is not keeping other people out; it is making use of their own personal data.

Instead, I propose that we should have a “right to use” our personal data: I should be able to access and make use of data that refers to me. Ideally, a right to use would reconcile my personal interest in small-scale insights with firms’ interest in big-data insights drawn from the larger population. These interests need not conflict.

Of course, to translate this concept into practice, we need to work out matters of both technology and policy.

What data are we asking for? Just the data we have actively opted into creating, like logs from self-tracking fitness apps? Or also the more hidden, passive, transactional data that refers to us, such as behavioral data collected by cookies and aggregated by third-party data brokers? These definitions will be hard to pin down.

There is also the question of processing. Will firms control the line where “raw” data becomes processed, and therefore proprietary? If we can’t even agree on the data representation of a “step” in an activity tracker, how will we standardize access to that information?
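To make the “step” problem concrete, here is a minimal sketch. The two payload shapes and field names below are hypothetical, not any real vendor’s API:

```python
# Hypothetical payloads from two fictional tracker APIs; the field
# names are invented for illustration, not any real vendor's format.
vendor_a = {"date": "2013-11-01", "steps": 9500}
vendor_b = {"day": "2013-11-01", "activity": {"step_count": 9120}}

def normalize(record):
    """Map either vendor's record onto one shared {date, steps} shape."""
    if "steps" in record:
        return {"date": record["date"], "steps": record["steps"]}
    return {"date": record["day"], "steps": record["activity"]["step_count"]}

merged = [normalize(r) for r in (vendor_a, vendor_b)]
```

Even after both records are mapped onto a shared schema, the counts for the same day disagree, because each device decides for itself what motion counts as a “step.” A common format solves access; it doesn’t settle meaning.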

Access to personal data also suffers from a chicken-and-egg problem. Consumer demand stays low because individuals lack robust tools for making use of disparate datasets, and such tools won’t gain traction without proven demand.

Making data available in usable formats also introduces the potential for portability, and with it new levels of competition in the market. But true portability will be challenging to achieve: proprietary devices will make it hard to transfer your movement history from one activity tracker to another.

And how does a right to use data relate to other personal data interests like privacy and security? These of course are important considerations, but we often conflate these interests, derailing productive conversations about our demands as individuals.

We could go about introducing a right to use data in one of two ways. An EU-style approach would establish the concept as a fundamental right, implemented from the top down. A U.S.-style approach might instead convene an industry consortium to address these concerns, adopting a self-regulated standard set of human-centered policies that give individuals more power to make use of their data.

Approaches along these lines are emerging. The proposed updates to the EU data protection directive explicitly require data to be available in a “usable” format, potentially addressing the technical standards problem faced by many trying to get API outputs to play nice with one another. And existing laws lay the foundations: The Fair Credit Reporting Act exposes specific data to consumers, and the U.S. Privacy Act includes the right to inspect and correct data held by governments. But these provisions don’t go far enough.

Everything about our lives is in the process of becoming data. Some suggest avoiding quantification as a subversive means of resistance, but that will be about as effective as hiding our heads in the sand. Instead, we need ways of understanding our own data profiles, so that we can understand how others see us. If we do nothing, the next generation will still be judged by its data, and we will end up in a Kafkaesque scenario in which we cannot know on what data points we are judged.