My one-man campaign against fear started on a chilly evening in San Francisco. I was discussing the future and technology as part of a book tour, and I only had time to answer one more question. From the stage, I called on a guy in his 50s. When he started talking, it quickly became obvious that he was quite upset. He was living in fear that texting and the Internet were stealing his girls, about 12 and 14, from him and his wife.
"They spend all their time like this!" he said, pretending to hunch over a smartphone. "I'm worried they won't be able to communicate with normal people. How will they ever get a job?" Technology, he feared, was stealing his daughters—and I represented those insidious smartphones. He was so upset that he was yelling by the end of his question, and security began closing in on him.
I understood why he was fearful, I told him: because he loves his daughters and wants them to have a good future. The fact is, his daughters' smartphones just haven't been around as long as TV; we haven't yet established norms, or a language, for what's socially acceptable and what's off limits. Gadgets and technology may change quickly, but people and our behavior do not. In 20 years, his fear about smartphones taking his daughters will seem quaint. We are currently in the middle of coming to grips with what these devices mean to us. This isn't a technology problem; it's a broader cultural conversation about what kind of future we want to live in. We need to have more conversations in our families, in our offices, and in the media about what we want and what's acceptable.
Over the last two years, I've been studying fear and technology—or the future of fear. I've seen some patterns for how fear and technology move into our lives. It can be broken into four stages.
Stage 1: “It will kill us all!!”
Reaction upon first hearing about early scientific or technological research being conducted by a university, corporation, or government. Usually the coverage appears in a technology or science magazine with a snappy headline but little substance. Next comes a clever screenwriter who turns that wee bit of science into a science fiction popcorn thrill ride that vividly shows us how the world will end but contains even less real science.
Synthetic biology is one example of a technology currently in Stage 1. An editor at a large science magazine once told me that he loves the nascent science because nothing else can go from obscure university research paper to global apocalypse in under 500 words the way synbio can. And it's true: When people have no context, they hear about a new technology and quickly extrapolate it into a sci-fi-infused end of days. James Cameron's Terminator story cycle, although fun, has left a deep and fearful scar across popular culture's understanding of artificial intelligence. (You would be surprised at how many very serious questions other futurists and I get about the robot apocalypse.)
Stage 2: “It will steal my daughter!!”
Reaction upon the dramatic success of said technology following its introduction to the market. This is usually where the moral panic sets in, perhaps brought on by a conversation in a bar/church/school-pickup line or a quick story on the nightly news telling you that you need to be afraid (typically accompanied by video of children walking home from school and using the technology).
Stage 2 is all about context. The technology is still unfamiliar, but now it has hit the market and invaded our lives, taking clear and malicious aim at all the things we love and care for. The angry father in San Francisco is the perfect example of that. He was struggling to make sense of a technology he didn't completely understand and the effect it might have on the people he loved.