
The citizen’s guide to the future.
Jan. 11 2017 1:36 PM

The Problem With “Playing God”

The phrase sows moral panic about science without helping to solve it.

Boris Karloff and Colin Clive in Frankenstein.

Universal Pictures


In Mary Shelley’s classic story Frankenstein, the notorious creature is hiding from human view when he encounters a suitcase in the woods filled with books and clothing. The monster reads Milton’s Paradise Lost and can’t help but compare himself to both Adam and a fallen angel. He recounts his discovery to his maker, the distraught Dr. Victor Frankenstein, with indignation:

Accursed creator! Why did you form a monster so hideous that even YOU turned from me in disgust? God, in pity, made man beautiful and alluring, after his own image; but my form is a filthy type of yours, more horrid even from the very resemblance.

Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, explicitly charging the mad scientist with the hubris of playing God. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”

The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.

The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.

So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.


What seems to drive most contemporary critics who invoke “playing God” is, likewise, not a religious or moral objection to human beings playing the role of creators, but a fear of the unintended social consequences of scientific discoveries and new technologies. (Such fear finds its footing in historic examples of chemicals deployed as weapons or leaked into drinking water, life-saving drugs that benefitted only the wealthy, and unethical experiments such as the Tuskegee syphilis study.) To urge against playing God, moreover, is to convey a mistrust of scientists—and to criticize their arrogance in the face of the power and unpredictability of nature. The phrase has become a stand-in for these deeper sources of public discomfort with science and technology that are better exposed and examined, rather than cloaked in superstitious warning.

The late evolutionary biologist Stephen Jay Gould once argued that Hollywood had “dumbed down” the subtleties of the original Frankenstein. Whale’s film, he noted, reduces the monster’s murders to biological determinism: He kills because his creator gave him the brain of a former criminal. While the popular movie attributes the creature’s violence purely to nature, the novel makes clear that it comes from the rejection he experiences from Victor Frankenstein and the rest of humanity. The wisest warning that Shelley proffers is not against creating life or imitating God, but rather against neglecting the outcomes of experimentation and discovery. Dr. Frankenstein, repulsed by the hideousness of his creature, cruelly abandons his invention, leaving him without the care and education needed to become a moral being. The creature’s murderous rampage is the result not of having been invented in the first place but of profound neglect.

The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.

Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions. (Albert Einstein, who contributed only indirectly to the making of the atomic bomb, tried to avoid this fate in his famous letter to FDR, while J. Robert Oppenheimer grew regretful after making the bomb.) Scientists who do not engage in public debates about their research may face backlash that curtails the technologies themselves, as we’ve seen in European bans on GMOs. Conscientious scientists will take on such social risks as engineering challenges, building safer self-driving cars and algorithms that correct for, rather than replicate or exacerbate, human bias and discrimination.

The environmentalist and futurist Stewart Brand opened the first Whole Earth Catalog in 1968 with this line: “We are as gods and we might as well get good at it.” The statement was a reflection on humanity’s awe-inspiring power to change the planet and the tragedy of the environmental impact it had already wrought. (Brand later wrote that he “stole” the line from the related words of the British anthropologist Edmund Leach.) The mantra “we might as well get good at it” could serve to expand the metaphor and lessons of Frankenstein for our time, offering a ready response the next time “playing God” surfaces in popular dialogue. And whether it’s artificial intelligence, CRISPR, or some other new technology on the horizon, that should happen any minute now.

This article is part of the Frankenstein installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month, we’ll choose a new technology and break it down. Future Tense is a collaboration among Arizona State University, New America, and Slate.

Bina Venkataraman is a fellow at New America and teaches at MIT. She is the director of global policy initiatives at the Broad Institute of Harvard and MIT. Follow her on Twitter.