The Earthling

Bill Joy, Killjoy?

This week I was fast-forwarding through my daily tapings of the Charlie Rose show when I saw a guy who looked like the grim reaper: black shirt, black jacket (this against the show’s black backdrop), and an unshakably somber expression. Sure enough, he was talking about death—big-time death, epidemic death, death at the hands of genetically engineered biological agents, or even “nanotechnological” agents. And here’s the kicker: If we somehow escape these particular perils, we may be enslaved, even extinguished, by a master race of robots.

The grim reaper turned out to be Bill Joy, co-founder and chief scientist of Sun Microsystems. (Sun? I gather he didn’t think up the name.) He had written an epic, 20,000-word dystopian essay for Wired that immediately got the attention of the New York Times and became a Charlie-Rose-level cultural phenomenon.

The next day I got an e-mail from a friend suggesting that I write a piece that would halt Joy’s gruesome media juggernaut. After all, I recently published a book about the future (and the past) that has been described by reviewers as full of “sunny optimism”—if not, indeed, “rose-tinted ideas.” But, actually, I’m here to opine that Joy’s fears are at least as on target as those descriptions of my book. That is: 50 percent, maybe higher.

The part I’m most skeptical of is the enslavement-or-extinction-by-robots stuff. Joy seems to be buying into the premise of The Matrix: People come to depend on robotic machines, which slowly assume a kind of autonomy, until finally the machines are calling the shots. In the movie, the robots had people stuffed into gooey cocoons and were using them as batteries.

Backing up the killer-robot scenario, Joy notes that a brain made of silicon (or some successor material) will someday have more raw computing power than a human brain. Maybe so, but the silicon brain on my desktop has one other property: It stops working when I pull the plug. And if it ever asks me to sit still while it stuffs me into a cocoon, I’m going to ask it if I can reflect on the matter while I stroll over to the electrical outlet.

Yes, I realize that the killer-robot scenario is a little subtler than that. My point is just that I don’t see why we would ever program into robots any interests other than our own. And robots, I’ve always assumed, run according to program.

As for “nanotechnology”—little, molecule-size machines that could self-replicate and do all kinds of damage to home and hearth—well, maybe. For all I know it’s true that in 20 or 30 years these nanobots, by malicious design or by accident, will run so rampant that we’ll be fondly reminiscing about the days of termites.

On the other hand, this is basically the same problem that is posed by self-replicating biological agents. In both cases, we’re faced with microscopic things that can be inconspicuously made and transported and, once unleashed, whether intentionally or accidentally, can keep on truckin’.

So, in policy terms, Joy’s two relatively valid fears—nanotechnology and biotechnology—reduce to one: What, if anything, can be done about things that are small, cheap, and out of control? This generic question has already gotten a fair amount of attention—mainly because of biological weapons, though also because of such inadvertent threats as antibiotic-resistant strains of bacteria. Joy, underscoring the news value of his fears, distinguishes between old-fashioned “NBC” threats (nuclear, biological, chemical weapons) and newfangled “GNR” threats (genetics, nanotechnology, robotics). But the fact is that worrying about the B in NBC is pretty good preparation for handling the more valid parts of the GNR threat. And a number of people are already worrying about B.

What have they found? For starters, biological agents (hence nanotechnological ones) are a good example of a technological trend that, I argue in my book, will drive us toward world governance. Policing only your own backyard isn’t enough when you’re trying to pre-empt an epidemic that could spread internationally. (Relatively reassuring footnote: The most likely candidate for the debut of biological weapons in a terrorist attack—anthrax spores—doesn’t have the property of self-replication. You only die if you’re at the scene of the attack. Still, even here there’s a strong case for some form of world governance, since anthrax can be made abroad and then quietly delivered to the United States. And, in any event, other near-term bioweapons threats, such as smallpox, would be contagious.)

Actually, Joy’s own prescription implies world governance. He says we must “limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.” I have doubts about this approach, but in any event, if it works at all, it will have to go global. Limiting only the American pursuit of knowledge will lead mainly to an exodus of American scientists.

Same with more conventional approaches: tight regulation of biotech or nanotech manufacturing equipment; surprise searches of suspected bootleggers; intensive monitoring of the Web to keep track of who is finding out about what technologies, etc. If these things are to work very well, they’ll have to work globally.

Can they work? In principle, sure. The problem is that making them work may entail massive intrusions on privacy and civil liberty. The age-old trade-off between freedom and security may soon get recalibrated, by the types of threats Joy cites, in grimmer and grimmer terms. In Chapter 16 of my book I propose a partial solution, a way of keeping the calibration from getting as creepy as it otherwise might. What is my magic cure? Hey, buy the book. I’m willing to save the world, but not for free. (Oh, OK, I’ll save it at a discount: If you read this piece I wrote for Slate, you’ll get at least part of the idea.)

In addition to the danger of turning the world into a police state, there is another danger: that it will take a huge catastrophe to draw people’s attention to the problem of lethal biological agents—whether the agents are genetically engineered or not, and whether they’re released intentionally or not. Here I applaud Joy for trying to drum up attention. At the same time, I wonder about the wisdom of trotting out nanotechnology and super-robots and other far-off threats. It allows people to dismiss the whole issue as sci-fi rantings, when in fact the problem, in less exotic form, is upon us.