Future Tense

Does That Look Like Me?

How planting false digital footprints can help you regain some privacy online.

Our lives rely on always being connected. Photo illustration by Juliana Jiménez; photo by Thinkstock.

You know that you’re being watched online. You know that your data is constantly being collected. The lack of privacy that tech users experience on a daily basis on countless platforms—from social media to search engines to retail sites—is obvious. Yet we quietly accept it, because most of us don’t know what else to do.

Opting out isn’t really even an option. Our lives, both personal and professional, depend on being connected. This reliance on goliath systems—ones average users know very little about—is asymmetrical and disempowering.

But New York University professors Helen Nissenbaum and Finn Brunton have a proposal. In their new book, Obfuscation: A User’s Guide for Privacy and Protest, they advocate taking evasive action, or what they call obfuscation. They define obfuscation as “the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection,” and they see its potential as a means of redress. For example, they discuss tools ranging from software that generates misleading Google queries (so the tech giant can’t get a read on you) to Tor relays, which conceal users’ IP addresses. Think of it as creating a diversion, or planting false footprints. Security expert Bruce Schneier has made similar suggestions—like searching for random people on Facebook, or using a friend’s frequent-shopper card at the grocery store. If you can distort the government’s or companies’ profiles of you, you’ve scored a small victory.
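To make the query-noise idea concrete, here is a minimal sketch, not taken from the book, of how a decoy-query generator in that spirit might work. The topic list, the pacing, and the use of httpbin.org as a stand-in endpoint (rather than a real search engine) are all illustrative assumptions.

```python
import random
import time
import urllib.parse
import urllib.request

# Illustrative pool of innocuous topics. A real tool would draw from a much
# larger, constantly refreshed phrase list so the noise is hard to filter out.
DECOY_TOPICS = [
    "banana bread recipe", "used hatchback reviews", "hiking boots sizing",
    "local weather radar", "beginner chess openings", "houseplant care tips",
]

def send_decoy_query(endpoint="https://httpbin.org/get"):
    """Send one decoy 'search' to a stand-in endpoint that simply echoes the request."""
    query = random.choice(DECOY_TOPICS)
    url = endpoint + "?" + urllib.parse.urlencode({"q": query})
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # we only care that the request (and its query string) went out
    return query

if __name__ == "__main__":
    # Fire a handful of decoys at irregular intervals to loosely mimic human
    # pacing, so any genuine queries are buried in plausible-looking noise.
    for _ in range(5):
        print("sent decoy query:", send_decoy_query())
        time.sleep(random.uniform(2, 10))
```

The point is the pattern rather than the particulars: genuine activity is hidden among plausible noise, so any profile built from the logged queries is ambiguous at best.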

The book also tackles the ethical debates surrounding obfuscation and discusses the varying aims of the tactic. Brunton and Nissenbaum acknowledge that obfuscation is an imperfect, piecemeal approach, but it’s one they believe could be a useful—even necessary—tool.

I spoke with Finn Brunton about Obfuscation and the discussions he and Nissenbaum hope to inspire.

Who are you trying to reach with this book?

Our goal for our readership is kind of threefold. First, people in general: literally anyone who is interested in how privacy online works today.

At the same time, we have a couple of more specific groups in mind. One of them is communities of actual developers, people who are producing new kinds of software tools, who can potentially take advantage of these techniques to provide services, and even create businesses, that won’t end up compromising the information their users provide in one way or another.

And then the last group is people who are making policy and creating regulations around information and the collection of user activity online, who may, we hope, find in these ideas a provocation to think about how that information could be protected without losing functionality.

People often say that if you don’t want your information to be collected, you should just delete your Facebook account. But you say that’s a fantasy. Can you explain?

The social cost of opting out has become so high that opting out is essentially a fantasy. This has always been part of the argument around digital privacy, which is “Well, if this bothers you—that your data is being collected and strange things are being done with it, that it’s being used to manipulate you or to create dossiers on people like you and having these unintended social effects—then that’s your problem and you should address it by just not participating.”

And that may have been a reasonable position for a decade and a half or so, but it’s definitely not a reasonable position anymore. So you can’t opt out, and when you can’t opt out and you are using the service, you are providing data to them in many different ways. You don’t know what is happening with that data. There is a difference in power between you and many of the large services and institutions that you have to engage with. You and they are not equal partners. You’re not coming into it, making a decision as an equal; you don’t really get to have a choice.

By the same token, there’s an information asymmetry. When you give up your data in the course of using the service, you don’t necessarily know what data about you is being collected. You don’t know, unless you are a very skilled technical professional, what can be done with that data. We don’t know how these systems are going to continue to improve. We don’t know what kinds of innovations and breakthroughs are going to come out of this space. It’s an enormously exciting space, intellectually and technically. New sorts of developments and discoveries are happening all the time. We don’t know what kinds of things those may reveal. We also don’t necessarily know how our data will continue to circulate when, for example, a service goes out of business and our data becomes part of the assets that get sold off, or when the company gets acquired, or something else happens. Or, for that matter, when your data gets harvested by some kind of state intelligence service that’s collecting people’s online activities.

In the book, you make the ethical case for using obfuscation tactics. But is there a risk of legal action for those who engage in it, for example by purposefully misusing a digital service or violating terms of service they’ve technically agreed to?

We are really interested to see whether engaging in obfuscation provokes some kind of legal response, because those responses would really clarify some questions about how your data is owned, who has rights to it, who controls it, and how that control is actually legally expressed. As it is, there has not been any significant legal response to the kinds of digital obfuscation practices we’re talking about, partially because many of them have been quite small-scale, often projects by individuals who are exploring the implications of doing things like this. Seeing if and how this becomes a legal issue will be a really revealing process.

You say that it’s currently not in businesses’ interest to develop more protective systems, and that we can’t trust the government to take on this responsibility. But obfuscation is very piecemeal. What do you think potential longer-term solutions might be?

I think the long-term solutions for this are going to be a mix of technical and policy. Policy in the sense that we rely on large-scale regulation, regulation at the level of states and even modes of global governance, to ensure the safety and utility of our environment and infrastructure. Obviously those things break down in various ways, but as a practical matter they give us things like the standardization and interoperation of services, electricity, clean drinking water, and regulation of who’s allowed to dump what into the air. We know that these are problems we can rely on government, to some degree, to regulate for the collective good, for creating a better climate in which people can live and do their work.

I think things are going to come out of the regulatory process that structure how businesses are allowed to make use of this data, and you can already see early forms of this project taking shape in a number of different places, especially the EU.

The other side is that there are enormously exciting and interesting technical initiatives taking place around building a more distributed, less centralized Internet and Web experience, one that makes it much harder to do things like build collective dossiers on people or groups, and much more difficult to engage in the kind of consolidation of data that we’ve been seeing so far.

Building a Web that makes spying much more difficult is an enormously compelling technical project, and it’s one that a lot of research is going into. So those are the two prongs that I see in the long-term solutions: technical solutions on the network side and political solutions on the regulatory side that recognize that our online lives are no longer just our online lives. They are our lives, and they need protection to the same degree that we expect our food not to be poisonous.

At the very beginning of the book, you say you want to start a revolution. What would this look like to you?

What it would look like to us is a small-scale revolution that slowly propagates. Part of what we like about obfuscation is that it’s an approach that doesn’t rely on perfect technology perfectly implemented, or on everyone getting on board at the same time.

It’s something that we hope can be picked up as a practice by individuals, groups, and different communities in steadily growing numbers: a way to participate in the network while making clear that the way services on the network are paid for, and the kinds of services being built on our data, have flaws; that those flaws need to be corrected; that there are problems with our current arrangements that demand change; and that this provides a way for us to begin making that change.

[Obfuscation] is not a replacement but rather a supplement, a complement that we would see added to the existing toolkit of privacy-protection practices, which ranges from selective disclosure and shared illusion among groups up to cryptography and the many related technologies.

We would begin to see obfuscation become a common part of that vocabulary, something that could be useful in lots of different contexts. Part of that usefulness would be precisely the ability to say, through practice, that the way we are online now is vitally important and technologically amazing, but is in many ways fundamentally unjust and works against our autonomy, and that’s not OK.

This interview has been edited and condensed for clarity.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.