Future Tense

Why I Don’t Want To Be a Cyborg

Still of Arnold Schwarzenegger in The Terminator.

Photograph © 1984 Metro-Goldwyn-Mayer Studios Inc. All Rights Reserved.

Hi Kyle and Brad. Excellent discussion we’ve got going here!

I hope you guys don’t mind if I cast myself in the role of Dr. Who, Captain Kirk, or John Connor—a selfless defender of humanity against threats alien, cyborg, and self-inflicted.

I believe that some of the technologies advocated by transhumanists genuinely threaten our humanity (or values connected with it). The threat is different in kind and more nebulous than that posed by Daleks or Terminators. But it’s no less real.

Do humans want radical enhancement? Let me start by saying why I, as a card-carrying human, am personally opposed to it.

First, I should clarify that I’m not really in the business of telling people what they can’t do. Like any old-fashioned liberal, I’m a fan of John Stuart Mill’s harm principle: People should be allowed to do what they want so long as they don’t hurt others. If you want to cover your entire body in tiny tattoos of Elmo, and doing so doesn’t hurt anyone else (or infringe copyright law), then go ahead. I see the philosopher’s principal job as one of informing rather than finger-wagging.

Some things that seem extremely attractive at the outset are less so upon closer inspection. I’m concerned that we’re being wowed by enhancement technologies. When I read Kyle’s blog and other explorations of radical enhancement (billion-fold enlargements of our intellects, millennial life expectancies), it’s hard not to feel dazzled. But you come away with a one-sided picture of human enhancement. When you’re buying a house, you need more to go on than the glowing appraisal of the agent who’s trying to sell it to you. My book Humanity’s End exposes some of the hidden costs of radical enhancement. (More on those costs in a minute.) Of course, it’s possible that what we gain when we trade in our Alzheimer’s-prone neurons more than makes up for what we lose. But I actually doubt it.

I’m really enjoying Brad and Daniel’s book. (I haven’t quite got to the end yet.) It seems to me a very useful engagement with technologies of human enhancement. Brad, you and Daniel are skeptical about philosophers and philosophical reflection, and as a consequence you want the debate about human enhancement to have a more practical focus. I’m a professional philosopher, so it’s not surprising that I disagree. I think the most interesting and pressing current questions about human enhancement are philosophical. They address human nature and the values tied to it. These questions can be frustrating, partly because the values associated with our humanity are complex, and partly because the notion of turning ourselves into nonhuman beings—transhumans, posthumans, various types of cyborgs—is new, something explored until now only in the most speculative science fiction.

Some people think we’ve been headed down this path for a long while. We’re certainly a technological species. But that means we’re a species that uses technology—not a species that’s destined (as Ray Kurzweil seems to think) to become technology. I don’t see how we signed up for posthumanity the minute one of us picked up a jawbone and bashed another one of us on the head, à la Brad’s 2001: A Space Odyssey example.

You can renounce a technology that hasn’t been integrated into your brain or body. If you decide that life’s better without a smartphone, you can toss it. Once we start technologically upgrading our brains and bodies, renunciation is much harder. Not all ways of applying technology to human brains and bodies are bad. But they are dangerous in a distinctive way.

So how are they dangerous? Here’s a hypothesis: Enhancement places human meaning under threat. We’re a social species; we connect with other humans and find meaning in what they do and experience. But we have a weaker connection with radically enhanced beings. Kasparov’s chess play is much more interesting to us than the objectively superior stratagems of Deep Blue. This interest in connecting explains why it’s worth “wasting” money to put humans on the moon and to try to send them to Mars. We take a pleasure in human experiences that we couldn’t take in machines that would gather the same data and collect rock samples more efficiently. By sending Armstrong and co. to the moon, we as a species got a veridical sense of what it’s like to step onto an alien world. Try as you might, it’s difficult to relate to the information-gathering processes of the Mars Rover.

Kyle diagnoses a tension in my thinking. My 2004 book Liberal Eugenics defends human enhancement: It argues that prospective parents should be granted a limited prerogative to enhance their children’s characteristics. Yet in 2010’s Humanity’s End, I oppose radical enhancement. There’s actually no contradiction here. Human enhancement is like exposure to the sun or quality New Zealand pinot noir: It’s good—but one can overdo it.

Kyle wonders whether I can give a principled answer to the question “How much is too much?” At what age does human life extension flip from being a good thing to being something that’s bad, or at least less good? I have to confess that I can’t answer these questions. But here’s another question that’s difficult in the same kind of way. (Apologies for all the analogies—they’re philosophers’ main resource when addressing issues that are new and difficult to understand.) What’s the right amount of money to spend on melanoma prevention? The two easy answers are clearly wrong. It can’t be zero dollars; melanoma is an extremely nasty cancer. It also can’t be the entire health budget; that would leave no money to spend on anything else. The right answer is somewhere between nothing and everything—it’s difficult to work out exactly where, but it’s certainly worth trying. I feel the same way about the right amount of human enhancement.

Kyle, you think we may have plenty of time. Maybe that’s true. But I agree with Brad about the quickening pace of technological change. Who’s to say that radical human enhancement won’t arrive (much) earlier than scheduled? If it does, then let’s be prepared. Brad, you seem to want to go the other way. You say it’s already too late. I’m not so sure. While humans can’t uninvent a technology, we can be selective in its uses. I can think of quite a few wars since 1945 in which U.S. presidents would have found it really rather convenient to drop a nuclear bomb or two. So I think that we can refrain from certain uses of enhancement technologies if we decide that doing so is important enough. I’m hoping for a future in which people will select very cautiously from what could be a pretty bewildering menu of possible human enhancements.

Kyle, you’re clearly a human-enhancement enthusiast. I see a lot of confidence from transhumanists about the future that radical enhancement will bring. Do you see any need for caution in the development or roll-out of enhancement technologies? Do you see a case for avoiding or limiting certain potentially dangerous avenues of research—artificial intelligence, for example?

Brad, you pick up on some really interesting things by grouping technologies together—seeing posthumanizing technologies as continuous with, and not fundamentally different from, earlier technologies. Do you think that your focus on general trends might occasionally overlook important moral differences?

Nick