Debating Extreme Human Enhancement

How To Determine Whether a Proposed Enhancement Is Ethical
Sept. 15 2011 10:09 AM


A man wearing an EEG brain-scanning apparatus plays a pinball game solely by willing the paddles to react.

Photograph by Sean Gallup/Getty Images.

Nick, Brad, we're homing in on something vital about human enhancement. Transhumanism is necessarily speculative, but I think we can pin down precisely why this topic, which seems so science-fictional, matters so much for the rather near future.

We agree that we don't want to ban enhancement, nor do we want people to have unregulated, wanton license to recklessly modify themselves. So we are left with the question of how we balance the extremes and avoid the past horrors associated with projects that claimed to know how to make human beings better.


Nick, to answer your question: Yes, I most certainly think we need a cautious approach. But, as you both point out, it is nearly impossible to accurately predict how most enhancement technologies will actually work or be used. I think we need an approach that presumes we have no idea what enhancement technologies will come into being. So we need a general philosophy, not a specific policy. Further, we need a philosophy that allows us to craft and adjust policy quickly in response to data, social shifts, and real-world events. Brad (you anti-philosopher, you), it is fitting that you brought up Mill and Burke. I think both have something to contribute to a cautious, realistic approach to enhancement.

To determine whether a given enhancement is ethical, we must focus on the individual. First ask: Does this policy greatly harm individuals or small communities? Then ask: Does this policy harm most individuals and groups? If both answers are no, we can move forward. What do I mean by harm? I tend toward Mill here, for whom a harm is a threat to life, limb, and liberty. But I also lean on Burke in recognizing that no policy or technology exists in a vacuum; we must be realistic about what we allow. In short, eugenics and enhancement can and should be among the means we use to better society, but the primary end of policies allowing or disallowing forms of enhancement should first and foremost be individual well-being.

But does protecting the individual mean everyone must be enhanced equally? Of course not. Our second point of caution, around fairness and justice, is likely to be addressed by what we might call equality of access over time. Most technology starts out expensive and exclusive but trends toward affordable and accessible. The rapid spread of cellular technology is a great example. I generally trust our current market economy to provide affordable access within a reasonable timeframe.

One might argue that those who have enhancements would act through the market or the legislature to prevent enhancements from becoming generally available, because if lots of people are enhanced, my enhancements will be less valuable. Not so! Enhancement is not a zero-sum game. In fact, one could argue that without general intelligence enhancement, humanity cannot survive the future.

Nick, you asked about my concerns around future threats like advanced AI. Let's add nanotech, global warming, nuclear exchange, and external threats like asteroids to the mix. There is a very good chance that our species is, as it stands, not smart enough to deal with these existential threats. If enhancements were zero-sum, we'd be doomed. But they're not. Enhancements, like most technologies, have network effects. Many enhancements, like intelligence and moral enhancement, get more beneficial as more people have them—for society, at least.

Moral enhancement includes modifications such as genetically increasing one's capacity for altruism or willpower; taking mood-lifting drugs that make you less likely to become upset or more likely to bond with those around you; and boosting "moral intelligence," that is, the capacity to understand why something is right or wrong, alongside other general cognitive enhancers. I bring these up in connection with network effects because some argue that if only a small number of people were morally enhanced, those few would be victimized: They would be too nice and too easily coerced. Thus, the argument goes, moral enhancement benefits society only once network effects are in play. I contend that being moral actually demands more of a person's fortitude, willpower, and mental dexterity, as well as a predisposition toward altruistic and beneficent behavior. Network effects, then, aren't necessary for moral enhancement to be beneficial; they simply make it far more beneficial for everyone. So it is with intelligence enhancement: The benefit of being smarter than everyone else as part of an enhanced minority would be insignificant compared with the massive benefit each person would receive if the vast majority of humanity had enhanced intelligence. We, as a society, have a major incentive to ensure that as many individuals as possible have access to intelligence enhancement, should they desire it.

So, when will we actually be able to use these ethical criteria centered on individual well-being to evaluate the new technologies coming down the pipeline? Brad, you implore Nick and me to look at the world around us—sound advice. Yes, technology overall is advancing at an incredible pace, but it is not as if we will go from Ritalin being a controlled substance today to being able to purchase off-the-shelf neuro-implants tomorrow. We have a rigorous FDA approval process for almost any new biotechnology, and that process is not rapid. So, while the FDA assesses scientific safety and efficacy, philosophers, scientists, and (ugh) politicians will be able to weigh the pros and cons of a given technology and rationally determine which policies to put in place to protect individual well-being before the new enhancement reaches the general market.

Nick, given your aversion to transcending natural human maximums, I'm curious: Would you consider it acceptable to make the average and the maximum human capacities for certain traits, like intelligence or longevity, one and the same?

Kyle
