Future Tense

The End of the Self?

Technology is upending our traditional definitions of who we are.

Japanese electronics giant Hitachi unveils a prototype model of a portable brain-machine interface at the company’s headquarters in Tokyo in May 2007.

Photo by Yoshikazu Tsuno/AFP/Getty Images

It’s 1630. Galileo is prepared to argue that the Earth revolves around the sun. For many of his contemporaries, however, the best arguments against him are not the theological ones of the church, but simple common sense. Anyone can stand on solid ground and watch the sun revolve overhead: The sun moves, the Earth doesn’t. Moreover, if Galileo were right, there would be enough wind to blow everyone off the planet, which obviously is not happening. In short, anyone with eyes and a hankering for experimental evidence can see that Galileo is just wrong.

Any discussion of the “self,” or related concepts such as “consciousness” or “free will,” faces the same problem. The phenomena are complex and, despite challenging the best minds throughout human history, still unexplained. Nonetheless, because everyone has immediate and intuitive access to themselves, everyone knows what consciousness is, knows that they exercise free will, and has common-sense ideas about their “self” that are powerful and unchallengeable. And yet these ideas, clear as they are, are also obviously inadequate—if not seriously delusional.

But just as the shift in the early European perspective on the universe had deep social and cultural reverberations, so technologically driven disruption of our “self,” and challenges to what we believe to be our fundamental identity, can be deeply disturbing psychologically and destabilizing socially. If these concepts are indeed shifting dramatically, that is a serious concern. We can deal with a certain ambiguity about what constitutes our selves, our identities, and our consciousnesses; it is when we are driven too fast and too far beyond that ambiguity that we get into problematic terrain.

Scientific experiments have shown, for example, that under some conditions the unconscious mind “decides” on hand movements well before the conscious decision to move the hand, suggesting that free will, under at least some circumstances, is illusory. Experiments with transcranial magnetic stimulation have shown that applying a magnetic field to a particular location on an individual’s skull produces significant changes in moral judgment, a function that many people associate intimately with their self and conscious behavior. Less esoterically, military training has for centuries been based on the need to reshape the “self” of individual recruits into integrated units that will follow orders even at high risk of death. Studies of mob behavior have long shown that people’s behavior changes dramatically in a crowd: People who are ordinarily nonviolent and friendly all too easily coalesce into lynch mobs. And we’re all familiar with the dramatic changes in personality that chemicals, injury, or brain tumors can cause.

Self, consciousness, and free will—easy turf to get lost in. So rather than doing that, let’s instead recognize that, whatever else we may be, we are an information-processing species. A self is constructed of information, consciousness is about managing information, and free will, if it is to mean anything, requires us to have and process information about ourselves, our environment, and the (probable) results of our actions. This makes one point crystal clear: Anything that profoundly changes information will profoundly change us.

And it is obvious that the information ecosystems we live in are changing dramatically. Google’s executive chairman, Eric Schmidt, is famous for having noted that today we produce as much information in two days as we did from the beginning of civilization to 2003. (Schmidt is the chairman of the New America Foundation board; New America is a partner with Slate and Arizona State University in Future Tense.) A more subtle change might be Google itself: In a quotidian and unremarked few years, it has granted each of us the powers of gods. How? It gives us immediate access to the world’s accumulated memory—and anyone with that capability even 30 or 40 years ago would have been a deity, or at least a major superhero. It isn’t that we necessarily know how to use that superpower, of course. Porn and pouting cats seem to keep most people firmly grounded.

Then consider augmented cognition, in which cognition occurs at the level of integrated techno-human networks, not at the level of the individual self. So-called augcog is not that strange: Modern cars don’t just have cruise control; they can maintain safe speeds given the traffic and conditions around them and can park themselves. Google, of course, already has autonomous cars in which passengers select the destination and the car drives; such cars may already be safer than those with humans at the wheel. And because conflict is growing more complex, and the data streaming from sensors, robots, unmanned aerial vehicles, and other sources increasingly overwhelms individual soldiers, militaries have been exploring augcog for more than a decade.

The category of “virtual sin” exemplifies another challenge. Is eating “virtual pork” acceptable for Jews? Is “virtual adultery,” in which one partner is engaging in virtual sex with someone who is not her partner, grounds for divorce? And, assuming no physical contact occurs as part of the virtual relationship, should the grounds for divorce be adultery or, rather, abandonment? After all, what appears to be happening is that aspects of one’s personality that cannot be expressed in the real world (because, for example, one is married) are expressed in the virtual world, with a consequent loss of time, attention, and personal intensity. In other words, the “self” that one married has been fragmented until, in actuality, one has been abandoned even if the physical wetware remains. Put another way, how many “selves” in how many environments and techno-human networks does any individual have the time and attention to maintain?

A qualitative change in our information environment every bit as seismic as the meteor that marked the end of the dinosaurs. Deity-scale information capability. Complexity driving cognition to ever more competent techno-human networks. Perceptual, conscious, and subconscious processing increasingly outsourced to technological systems. Fragmentation of the self across avatars in various, increasingly engaging virtual realities. But any such list is misleadingly simplistic. The technological evolution affecting the self is not a collection of interesting but isolated case studies; rather, it represents profound and accelerating change across the entire technological frontier. And the conscious self is where these changes must be integrated, or at least collated.

Whether the evolution of digital selves in response to such foundational technological change is “good” or “bad” is hotly debated, but in the end such debates are beside the point. Information appliances, including sophisticated ones like humans, would be grossly unfit if they failed to evolve under the circumstances. Those of us who grew up under much different conditions can rightly note the differences, but we cannot thereby claim any moral high ground, nor, for that matter, can we criticize those who develop appropriately for a much different world. Moreover, the degree to which we can shape this digital technological tsunami may be far less than we naively think. That the era of the digital self is upon us is increasingly clear; what that means for the most complex of information processors is only beginning to emerge.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. On the evening of Friday, March 7, ASU will explore the “Future of Me” at Emerge: A Carnival of the Future in Phoenix. For more information and to RSVP, visit emerge.asu.edu.