Future Tense

What if Computers Know You Better Than You Know Yourself?

I recently read about the launches of both an “ultrasecure” mobile phone for protecting privacy and a clip-on camera that snaps a picture of your life every 30 seconds. Our cultural relationship with data is more complicated and contradictory than it has ever been, and our debates on the subject almost always center on privacy. But privacy, the notion that only you should be able to control information about yourself, cloaks a deeper tension between information and meaning, between databases and insights.

Our digital breadcrumbs now tell stories about us that are deeply secret, moving, surprising, and often unknown even to ourselves. These days, when a computer crunches the numbers and tells you “this is who you are,” it’s hard to contradict, because there’s more data about you in the machine than there is in your head. Algorithms excel at tracking exactly the information that’s hardest for us to hold in our heads: how long we talk to Mom, or which day of the week we splurge on an extra cookie.

The idea that a computer might know you better than you know yourself may sound preposterous, but take stock of your life for a moment. How many years of credit card transactions, emails, Facebook likes, and digital photographs are sitting on some company’s servers right now, feeding algorithms that model your preferences and habits? What would your first move be if you lost your smartphone in a new city? I think mine would be to borrow someone else’s smartphone and then get Google to help me rewire the missing circuits of my digital self.

My point is that this is not just about inconvenience; increasingly, it’s about a more profound kind of identity outsourcing. The topic has come to the forefront for me because my research center at Arizona State University is helping to stage Emerge, an annual event mashing up technology, performance, and deep thinking about the future, and our 2014 theme is “The Future of Me.” (Disclosure: Future Tense is a partnership of Slate, ASU, and the New America Foundation.) What happens when important parts of “me” exist only online? When hackers took over Wired reporter Mat Honan’s Google account, they were able to compromise his social media profiles, plaster the Internet with vile messages in his voice, and, worst of all, remotely wipe all of his Apple devices, erasing “a year’s worth of photos, covering the entire lifespan of [his] daughter.” This was identity theft, but it was also a kind of identity lobotomy, destroying parts of Honan’s life and, most likely, fundamentally altering who he is.

Horror stories like this show only part of the picture, however. Most of us are not wrestling with identity lobotomy but with something more like adolescence, in which our data sprouts up in all sorts of weird and awkward places, pumping out signals about us that we can barely understand, much less control. Consider “micro-targeting,” in which political and advertising campaigns refine a message for an individual voter or consumer with startling precision. The inferences that Google or Netflix or Amazon make about who you are can occasionally be privacy invasions—as various legal disputes demonstrate—but they are also identity problems. Our digital selves shadow us in job interviews, first dates, loan evaluations, and insurance claims, and many of these identities are hidden from us on servers where we are distinctly not invited.

But of course we’re not surrendering our iPhones or our cloud-based storage anytime soon, and many have begun to embrace the notion of the algorithmically examined life. Lifelogging pioneers have been at it for decades, recording and curating countless aspects of their own daily existences and then mining that data for new insights, often quite beautifully. Stephen Wolfram crunched years of data on his work habits to establish a sense of his professional rhythms far more detailed (and, in some cases, mysterious) than a human reading of his calendar or email account could offer. His reflections on the process are instructive: He values the data not only as “an adjunct to my personal memory” but also as a way “to be able to do automatic computational history—explaining how and why things happened.”

We may not always be ready to hear what those things are. At least one Facebook user was served an ad encouraging him to come out as gay—a secret he never shared on the service and had divulged to only one friend. As our digital selves become more nuanced and complete, reconciling them with the “real” self will become harder. Researchers can already correlate particular tendencies in Internet browsing history with symptoms of depression—how long before a computer (or a school administrator, boss, or parent prompted by the machine) is the first to tell someone that he or she may be depressed?

When we start depending on our computers to explain how and why things happened, we are outsourcing not just the talking points but the narrative itself. The machines can be Vogon-esque in their rigidity, like the algorithms that fired a warehouse worker for missing a day when his baby was born. They can also be minutely insightful, like the Netflix system that breaks movies down into 76,897 categories.

In history, in business, in love, and in life, the person (or machine) who tells the story holds the power. We need to keep learning how to read and write in these new languages of data, to start really seeing our own shadow selves and recognizing their power over us. Maybe we can even get them on our side.

On the evening of Friday, March 7, ASU will explore “The Future of Me” at Emerge: A Carnival of the Future in Phoenix. For more information and to RSVP, visit emerge.asu.edu.