An iPhone Is an Extension of the Mind

The FBI isn’t asking to access an uncrackable safe. It wants to compromise the boundaries of the self.

Should Apple have to comply with a court order compelling the company to write code that would allow the FBI to crack into a terrorist’s iPhone? Apple CEO Tim Cook thinks not. The company’s lawyers have argued on First and Fifth Amendment grounds that the state cannot compel the tech giant to weaken a device’s encryption. If that is the sole, correct framing of the issue, then society faces a simple question of how to interpret existing statutes in light of constitutional constraints.

But legal principles, not to mention technological fears, aren’t the only prisms through which we can view this case, which may have big implications for how the government treats individual privacy. Regardless of whether the state is seeking one-time access to a single Apple product or far wider powers—and regardless of whether the software it wants Apple to create could unleash an encryption-foiling backdoor key into a world where no one can control it—the FBI by its demands and Apple by its legal rejoinders have raised deeper questions about the moral significance of our devices. From their perspectives, we either do or do not have the right to use machines that, through encryption, can wholly shut out the prying eyes of investigators. But in certain philosophers’ views, that’s entirely the wrong way to look at this dispute. An iPhone isn’t a safe. It’s an extension of the mind.

The safe analogy has been a frequent motif of this debate over the past few weeks—and one that has unfortunate consequences. Suppose, for example, that the Acme Safe Company made a safe that, when breached by anyone but its owner, automatically destroyed its contents. Is Acme doing something especially objectionable by building this safe? By asking that question, we ask if the existence of the safe itself is somehow problematic. It seems obviously not. Almost everything in the world can be transformed into something nefarious. Should the bricks with which we build houses be designed so that they cannot be thrown at another human being? The question of regulating the production and use of some object depends upon a lot more than whether that object could, in some possible world, play a role in deadly violence.

The safe analogy poorly serves any effort to make sense of the moral significance of our information-storing devices. Consider this example instead. Suppose you’ve become a memory master thanks to classes taught at Bornstein’s Memory School. A crucial feature of Bornstein’s training is that if someone tries to force you to reveal a memorized bit of information, you automatically forget that bit of information. You cannot be coerced to remember. Does the state have the authority to compel Bornstein’s to include a special backdoor into your mind so that it can hack into your memories without triggering the auto-destruct feature?  

The reason this analogy is illuminating is that it isn’t fanciful. Our electronic devices—or at least many of the processes that occur within them—are literally parts of our minds. And our consideration of Apple’s and the FBI’s arguments ought to flow from that fact.

This may sound ridiculous. But in an important essay co-authored with David Chalmers, and then in a book of his own, the philosopher Andy Clark argued for something called the extended mind hypothesis. The basic idea is that we have no reason to treat the brain as the only place where mental processes can occur.

To take a simple example, imagine that you’re using your voice to help you count—you are saying, under your breath, “21, 22, 23,” and so on. You aren’t counting only by using neurological processes; you are also using the processes involved in saying those numbers under your breath. Now imagine that instead of whispering the numbers as you count, you are making marks on a pad of paper, one for each item you count. The pad is as much a part of the counting process as your vocalizations were. Finally, imagine that you are using both your brain-based knowledge of the city and Google Maps to navigate your way to a museum. The transitions from memory to Google Maps to decisions about where to walk are seamless.

In cases like these, Clark has argued, our mental processes extend beyond our neurological processes and into the parts of the world that we regularly use. What makes it so that these parts of the world partially constitute our mental processes? Minimally, they are as easily and reliably accessible as our brain processes are.  

But, one might argue, a device running Google Maps isn’t nearly as reliably and easily accessible as our brains are! It can’t be as much a part of our minds as the processes running within our brains, right? But since when were our brain processes so reliably accessible? After a few drinks, our brain processes aren’t all that reliable. Anyone who’s had a newborn knows full well that mild, sustained sleep deprivation can leave one’s cognitive processes an unreliable mess. Yet that does not rule out brain processes as mental processes. To count as an extension of the mind, what matters is that a certain threshold of easy, reliable access is met. And our phones—our information-technology devices generally—typically meet that threshold.

This is especially the case when it comes to the role that our phones play in both communication and information storage. There is simply no principled distinction between the processes occurring in the meaty glob in your cranium and the processes occurring in the little silicon, metal, and glass block that is your iPhone. The photos stored on the phone’s solid-state drive are your memories in the same way that the images stored in certain groups of neurons in your brain are your memories. Our minds extend beyond our heads and into our phones.

So if the state demands that manufacturers provide it with backdoor access into our devices, the state is literally demanding access into our minds, or at least the minds of those people who use smartphones. This sort of partial mind reading would be a tremendous advantage for law enforcement, one it did not enjoy in the pre-digital era. But it would also massively compromise the boundaries of the self.  

The issue at the heart of this debate, then, is not merely whether the state, in attempting to prevent terrorism, might overstep traditional protections of privacy or accidentally make our credit card information more accessible to Russian hackers. Rather, it is a very deep issue about the permeability of the boundaries of the self. How much of ourselves should we give over to the state?

There is no easy answer to this question. But it’s worth remarking that people seem very comfortable with the abandonment of previously cherished forms of self-concealment. People conduct their emotional and fantasy lives across an array of social media platforms. In doing so, they distribute their mental processes across networks. This necessarily has the twofold effect of both broadening one’s mental capacities—we begin to think collectively, or plurally, thanks to reliable access to these networks—and shrinking the horizons of the discrete, individual self. The trade-off may not be obvious, but it is required if one is to enjoy the expanded mental and agential capacities of a fully networked life.

But these networks are themselves typically owned by corporations. So, while Apple may choose not to hack into its customers’ phones—and may even create encryption technology that prevents it from doing so—its customers have chosen to entangle themselves in Apple’s vast informational architecture. When I use Apple Music to listen to the Dead Kennedys, I am not merely trading vinyl for an MP3. I am also trading a private activity for a shared, albeit corporation-owned, activity. Listening to streaming music necessarily occurs within a decentralized network whose nodes are mostly not in my own head but are instead spread across a system owned and (mostly) operated by a private corporation.

This mass transformation of the private self to the networked self suggests that even a state-mandated backdoor to all iPhones would not lead to a qualitative shift in our lives. Rather, it would represent a further slide away from the discrete individual and toward the decentralized, networked self.

But the issue, some say, is not access to our minds but state access to our minds. Is this problem really more frightening than the already existing corporate access to our minds? After all, states—at least democratic states—are supposed to be embodiments of our most idealistic commitments to the networked, decentralized self. When the properly democratic state acts in our names, we all publicly act together, even if this action does not occur only in our own bodies (if it occurs in our bodies at all). So the extension of even the nominally democratic state’s reach into our device-expanded mental lives may not be nearly as insidious as the extension of corporations’ reaches into our device-expanded mental lives.

On the other hand, if one believes that we cannot trust the state (and we probably can’t), then one should entertain even more doubts about private corporations whose overarching aim is the generation of profit. And that thought might turn this whole debate, which has focused on the state, on its head. 

The government should not be demanding that Apple build a special backdoor into one iPhone, or into any iPhone. Rather, it should demand that Apple, Google, Facebook, and their ilk render the operations of their own networks opaque to themselves. There is no reason to allow corporate access to our minds while vigorously denying state access.