Future Tense

Even If We Don’t Want Enhancement, We Might Not Be Able To Stop It

Still from 2001: A Space Odyssey.

Photograph © 1968 Warner Bros. Entertainment Inc. All rights reserved.

Kyle, thanks for the start. I look forward to hearing you folks laugh hysterically (but virtually) as you read my responses.

The “What is transhumanism?” question needs to be asked, but that question can lead one off the scent. I’m willing to stipulate for the court that transhumanism is a philosophy of human progress through primarily technological evolution, which—reflecting attitudes that go back to the Enlightenment—some people are for, and some people are against. If you are skeptical of authority, as many Americans are, you probably favor some sort of fairly libertarian approach to enhancement—“it’s my body, and I’ll enhance it if I want!” If you’re more conservative—a touch of Burke, perhaps—you are probably comfortable with the idea that there is a “human nature,” and that, all in all, it would be a good idea to respect it. I think both these positions, reflecting a simple-system perspective on a complex adaptive system, probably capture only part of the overall gestalt. I’d rather focus on some additional questions that Kyle’s piece raises for me.

First, though, I would suggest that readers reacquaint themselves with (or meet for the first time) the opening scene of the film 2001: A Space Odyssey. The apes jitter about the monolith that has appeared among them; then one goes off to a pile of bones. Picking one up, he first toys with it, then starts swinging it, then realizes that it is not just a bone; it is a weapon. He begins crushing things with it, including other apes. My point: An ape with a bone is a very different creature from an ape with a bone who knows it’s a weapon. The culture and psychology of being human in the world lie in being connected to such things, whether it’s a bone, a ’67 Corvette 427 Sting Ray, or Google (if you don’t think the Corvette belongs in there, you weren’t a male adolescent in the late ’60s). In other words, it is at least arguable that to be human is to be transhuman. To Kyle’s point: It is not that we’re transcending our biology; rather, we’re fulfilling our biology. That argument is over, and has been since hunter-gatherer times. So what has changed?

Two things, I think. First, technological change is far more rapid and pervasive than it has been in the past. We’ve always had technologies that restructured society, culture, economies, and psychology—the steam engine did, railroads did, cars did, airplanes did, and search engines that increasingly substitute for memory do. But depending on how you count, we now have five foundational technologies—nanotech, biotech, robotics, information and communication tech, and applied cognitive science—all of which are not only evolving in interesting and unpredictable ways but actually accelerating in their evolution. Moreover, they’re doing that against the backdrop of a world in which systems we’ve always framed as “natural”—the climate, the nitrogen and phosphorus cycles, biology and biodiversity, and others—are increasingly products of human intervention, intentional or not. We are terraforming everything, from our planet to one another … and it’s all connected, of course.

Critically for our purposes, the human being is more and more becoming a design space. Per our ape friend above, this is not new. But the rate of change and the concomitant (and mainly still potential) psychological, cultural, and social dislocations are. So in some ways transhumanism is not, as we’d like to frame it, about us; it’s about reality as we know it becoming our design space—including, and especially, that part of reality that we have heretofore ring-fenced as “human.” What we are really seeing is the human equivalent of the Great Divergence, that period in history when economic development touched the lives of some cultures and led them to exponential economic growth, while others remained behind. I’m not saying this is a good thing. But it is where we’re headed. Already, people in developed countries live almost twice as long as others, similarly human, in some developing countries. That’s a pretty profound divergence, and we seem to accept it en passant.

So to Kyle’s points. First, I agree that humans like to enhance: We buy stuff that enhances us all the time; we couple into media, computers, and distributed cognition networks such as Google with abandon; and we use enhancement drugs pretty freely when we can get them. Students do, athletes do, workers do. And even where some risk is involved, we enhance just for cosmetic purposes. (I live in Scottsdale, Ariz., and sometimes it seems as if half the local GDP out here comes from cosmetic surgery that enhances virtually every part of the human form—or, if there be too much of a particular form, de-enhances it.) And we choose life-extension enhancements just as freely—that’s what vaccines are, that’s what a lot of modern medical care is. We’re already seriously enhanced over previous generations, and there’s little reason to assume we won’t continue to extend our lifespan if we can—and handle the resource implications the same way we always have, with blood and iron.

The naive libertarian position on enhancement tends to overlook systemic effects. For instance, would the world really be better off if violent tyrants stayed awake longer and concentrated better? At what point, as Kyle notes, do individual enhancements add up to community-scale shifts, such that if I don’t enhance, I’m the “new subnormal”? There is, of course, the other popular position: that there is a “human nature” that is core to being human, and that “human nature” cannot, and should not, be subject to human design. The trick here is, who gets to define “human nature”? Generally, the folks who use this argument define it very broadly, often based on specific, usually conservative, religious traditions. But, just as with naive libertarianism, this “human nature” position raises difficulties: Since people seem to want to enhance, if I take the human-nature posture I am necessarily arguing that authoritarian force—the state, the church—must stifle research on, or deployment of, enhancements that people would otherwise choose to use. And the justification usually rests, implicitly or explicitly, on my worldview, as opposed to other worldviews that might allow, or even embrace, such enhancement.

I think, however, there’s a more basic question: Even if we don’t like enhancement, can we stop it? If a particular technology gives a nation or a culture an advantage, won’t it be developed anyway over time? Adam Smith famously described blue-collar division of labor with his pin factory. Railroads and the other massive firms of industrial capitalism created white-collar division of labor with their size and complexity. Human enhancement simply extends this trend into the realm of the physically and psychologically human to a degree heretofore not feasible—the ultimate division of labor, as it were. And each resultant increase in efficiency and productivity feeds directly into power—economic and military and cultural.

I don’t say that’s good. I don’t say that it won’t cause significant problems, as religions begin to understand that humans have taken self-creation into their own too-fallible hands, and as we struggle with the prejudice and hate that humans always display, especially under pressure. But I do think human enhancement—transhumanism—is going to be hard to govern and harder to stop. You want the future? You can’t handle the future. But it’s coming anyway.

Over to you, Nick!

Best,
Brad