Kyle, you've alerted me to your excellent blogs. It's been good crossing (philosophical) swords with a smart transhumanist. Brad, you've convinced me that philosophers of enhancement tend toward an overly static view of technology. We tend to downplay network effects. Your book not only brings these out but offers a systematic treatment of them. It's a really distinctive contribution to the enhancement debate, though I wish there were fewer references to Martin Heidegger—Kool-Aid for the (Nazi) cognoscenti.
Perhaps I can make a few final comments on my hopes for our species' future. First, I'm not opposed to change. I accept that we are an evolving species. I can imagine that those who really desired a totally changeless humanity might actually take a keen interest in certain biotechnologies. Geneticists are busy surveying the human species as it stands in the early 21st century—they're providing a picture of shared and variable elements of the human genome. Suppose there were Guardians of the Human Genome (Steven Spielberg, are you reading?) empowered to preserve the human species as of 2011. They'd seek to purge new genetic variants and maintain existing variants at their 2011 frequencies. These imaginary Guardians wouldn't be concerned with any absurd superpowers granted by genetic mutations as per the X-Men movies. They'd view preservation of our current human genetic identity as a valuable end in itself.
Needless to say, this bioconservatism is absurd. Humans will change. But it still makes sense to have preferences about the nature of this change. I think that abrupt, large changes—the kind offered by transhumanists—tend to sever meaningful intergenerational connections. It's one thing to view Granny as hopelessly out of touch with iPads and Xboxes. It's another to view her as an idiot, barely getting by on yesteryear's genetic enhancements. I have similar feelings about my immediate descendants. After going through their teenage stage of despising everything I am and represent, I hope that my kids look back on my life and find just a couple of the things that I did meaningful (wipes away a tear).
Humanity won't last forever. But I don't see why we can't enjoy being human and relating to humans for just a little while longer.
I don't think this involves denying people things that they (really) want. Brad, I wonder about your proposed experiment of giving people the opportunity to choose enhancement and seeing what they do. A few years ago, lending institutions ran an experiment to see whether people with limited financial means would choose adjustable-rate mortgages. Lots did. But were these mortgages really in their best interests? Would people have chosen them if they were better informed? We now know that, for many borrowers, the answer to these questions was no.
People need information about the values on both sides of the enhancement debate. It's relatively easy to see the appeal of life extension and cognitive enhancement. Their benefits are easily recognized and quantified. If Wal-Mart offers to set up in your town's historic district, it has the advantage of offering things whose value we can easily acknowledge and quantify: cheap consumer products and employment. Opponents have a more difficult time saying exactly what would be lost (it's not as if any of the buildings to be demolished appear on UNESCO's list of World Heritage Sites). Values that are difficult to express tend to get elbowed out by values that are easier to state and quantify. I think we need to put more work into expressing them.
Kyle, you think that enhancement of general intelligence may be required for humanity's survival. Such enhancement might protect us from the fate of the dinosaurs, consigned to extinction by a threat that we couldn't even begin to comprehend. This is just a speculation, but I wonder whether responding (or failing to respond) to extinction threats really depends on our being more intelligent. I was impressed by an argument in David Deutsch's recent book The Beginning of Infinity. According to Deutsch, there are no threats that we, as children of the Enlightenment, are incapable of responding to. We differ from the dinosaurs in having science. Humans are essentially problem-solving creatures. And really, if Bruce Willis is clever enough to deflect a potentially humanity-destroying asteroid in Armageddon, then why shouldn't the rest of us be?
My opposition to certain enhancement technologies is no Luddite rejection of technology. It may seem odd to ally myself with the Trekkies, but I don't see what's wrong with a vision of the future in which beings like Capts. Kirk and Picard—fully recognizable human beings—get to travel to the stars. Surely we all don't have to become Spocks or Datas to qualify for that privilege.