Debating Extreme Human Enhancement
This conversation is part of Future Tense, a partnership between Slate, the New America Foundation, and Arizona State. On Thursday, Sept. 15, Future Tense will host an event in Washington, D.C., on the boundaries between humans and machines. RSVP here to join us for "Is Our Techno-Human Marriage in Need of Counseling?"
Photograph by Sean Gallup/Getty Images.
Nick, Brad, we're zeroing in on something vital about human enhancement. Transhumanism is necessarily speculative, but I think we can pin down why, precisely, this topic, which seems so science-fictional, matters so much for the rather near future.
We agree that we don't want to ban enhancement, nor do we want people to have unregulated, wanton license to recklessly modify themselves. So we are left with the question of how we balance the extremes and avoid the past horrors associated with projects that claimed to know how to make human beings better.
Nick, to answer your question, yes, I most certainly think we need a cautious approach. But, as you both point out, it is nearly impossible to predict accurately how most enhancement technologies will actually work or be used. I think we need an approach that presumes we have no idea what enhancement technologies will come into being. So we need a general philosophy, not a specific policy. Further, we need a philosophy that allows us to craft and adjust policy quickly in response to data, social shifts, and real-world events. Brad (you anti-philosopher, you), it is fitting that you brought up Mill and Burke. I think both have something to contribute to our cautious, realistic approach to enhancement.
To determine whether a given enhancement is ethical, we must focus on the individual. First ask: Does this policy greatly harm individuals or small communities? Then ask: Does this policy harm most individuals and groups? If both answers are no, we can move forward. What do I mean by harm? I tend toward Mill here, for whom a harm is a threat to life, limb, and liberty. But I also lean on Burke and recognize that no policy or technology exists in a vacuum; we must be realistic about what we allow. In short, eugenics and enhancement can and should be among the means we use to better society, but the primary end of policies allowing or disallowing forms of enhancement should first and foremost be individual well-being.
But does protecting the individual mean everyone must be enhanced equally? Of course not. Our second point of caution, around fairness and justice, is likely to be addressed by what we might call equality of access over time. Most technology starts expensive and exclusive but trends toward affordable and accessible. The rapid spread of cellular technology is a great example. I generally trust our current market economy to provide broadly affordable access within a reasonable timeframe.
One might argue that those who have enhancements would act on the economy or the legislature to prevent enhancements from becoming generally available, because widespread enhancement would make their own enhancements less valuable. Not so! Enhancement is not a zero-sum game. In fact, one could argue that without general intelligence enhancement, humanity cannot survive the future.
Nick, you asked about my concerns around future threats like advanced AI. Let's add nanotech, global warming, nuclear exchange, and external threats like asteroids to the mix. There is a very good chance that our species is, as it stands, not smart enough to deal with these existential threats. If enhancements were zero-sum, we'd be doomed. But they're not. Enhancements, like most technologies, have network effects. Many enhancements, like intelligence and moral enhancement, get more beneficial as more people have them—for society, at least.
Moral enhancement includes modifications such as genetically increasing one's capacity for altruism or willpower; taking mood-lifting drugs that make you less likely to become upset or more likely to bond with those around you; or boosting "moral intelligence"—the capacity to understand why something is right or wrong—along with other general cognitive enhancers. I bring them up alongside network effects because some argue that if only a small number of people were morally enhanced, those few would be victimized, since they would be too nice and too easily coerced. On that view, moral enhancement benefits society only once network effects are in play. I contend, instead, that being moral actually requires more of a person's fortitude, willpower, and mental dexterity, as well as a predisposition toward altruistic and beneficent behavior—meaning network effects aren't necessary for moral enhancement to be beneficial, though they make it far more beneficial for everyone. So it is with intelligence enhancement: The benefits of being smarter than everyone else as part of an enhanced minority would be insignificant compared with the massive benefits each person would receive if the vast majority of humanity had enhanced intelligence. We, as a society, have a major incentive to ensure that as many individuals as possible have access to intelligence enhancement, should they desire it.
So, when will we actually be able to use these ethical criteria centered on individual well-being to evaluate the new technologies coming down the pipeline? Brad, you implore Nick and me to look at the world around us—sound advice. Yes, technology overall is advancing at an incredible pace, but it is not as if we will go from Ritalin as a controlled substance today to off-the-shelf neuro-implants for purchase tomorrow. We have a rigorous FDA approval process for almost any new biotechnology, and that process is not rapid. So, while the FDA assesses scientific safety and efficacy, philosophers, scientists, and (ugh) politicians will be able to weigh the pros and cons of a given technology and rationally determine which policies to put in place to protect individual well-being before the new enhancement reaches the general market.
Nick, given your aversion to transcending natural human maximums, I'm curious: Would you consider it acceptable to make the average and the maximum human capacity for certain traits, like intelligence or longevity, one and the same?