Future Tense

What Victor Frankenstein Got Wrong

A modern bioengineer explains how fiction and reality both demand open science.

An Anopheles stephensi mosquito. If properly harnessed, “gene drive” technology could let us stop mosquitoes from spreading malaria.

CDC

On Thursday, Feb. 2, Future Tense—a partnership of Slate, New America, and Arizona State University—will hold an event called “The Spawn of Frankenstein” in Washington to discuss the novel’s legacy. For more information and to RSVP, visit the New America website.

It seems banal to state that the future of our civilization will be determined by the technologies we invent and the wisdom with which we deploy them. That banality may explain why we collectively spend so little time concerned with how best to proceed. Literature offers many cautionary tales, but we seldom pause to re-evaluate them in light of modern capabilities. Early in Mary Shelley’s Frankenstein, perhaps the first science-fiction novel, Victor Frankenstein issues a personal warning:

Learn from me, if not by my precepts, at least by my example, how dangerous is the acquirement of knowledge and how much happier that man is who believes his native town to be the world, than he who aspires to become greater than his nature will allow.

We are fortunate beyond measure that so many great scientists ignored this advice. From vaccines and antibiotics to abundant food and energy, technological advances have liberated most of humanity from the worst of disease and want. Our gains may be fragile, requiring a steady diet of new discoveries to maintain, yet they are real and deserve celebration.

Still, it is hubris to assume that past triumphs guarantee any measure of future success. As a research scientist currently working on a controversial new approach to ecological engineering, I recently reread Frankenstein and was surprised by its depth. Far from the popular characterization of “scientist meddles with life, tragedy ensues,” Victor Frankenstein is a deep yet flawed human being whose mistakes are relevant for researchers today. Whether or not it was Shelley’s intent, the novel’s message is clear: Wisdom is knowing whether, when, and how to develop new technologies—and when to lock them away for as long as we can.

In 2013, after helping to develop CRISPR—a molecular scalpel for precisely cutting and therefore editing any DNA sequence—I realized that we could use it to construct “gene drive” systems to alter the traits of wild populations. Here’s how it works: Instead of just using CRISPR as a tool to edit DNA once, we can program the organism’s genome to do the editing on its own, then let it mate with a wild counterpart. In the offspring, CRISPR will convert the original DNA sequence inherited from the wild parent to the new, edited version. With two copies, the next generation is guaranteed to inherit the edit—as is the next, and the next, and the next. Think of it as a find-and-replace for entire wild populations. The implications could be profound: As Austin Burt of Imperial College London first predicted more than a decade ago, learning to harness gene drive could let us stop mosquitoes from spreading malaria.
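For readers who like to see the arithmetic, the sketch below is a deliberately simplified, deterministic model of that find-and-replace dynamic: a single drive allele in a randomly mating population, with an assumed conversion efficiency and no fitness cost. The numbers are illustrative placeholders, not measurements.

```python
# A deliberately simplified, deterministic sketch of gene drive inheritance.
# Assumptions (illustrative, not measured): random mating, no fitness cost,
# and a single conversion-efficiency parameter for how often CRISPR copies
# the drive allele onto the wild chromosome in heterozygotes.

def next_frequency(p, conversion=1.0):
    """Drive-allele frequency in the next generation.

    Offspring genotypes follow random mating; drive/wild heterozygotes are
    converted to drive/drive with probability `conversion`.  Setting
    conversion=0 recovers ordinary Mendelian inheritance.
    """
    drive_homozygotes = p * p
    heterozygotes = 2 * p * (1 - p)
    # Each converted heterozygote carries two drive alleles instead of one.
    return drive_homozygotes + heterozygotes * (1 + conversion) / 2


def spread(p0=0.01, conversion=1.0, generations=15):
    """Trace the drive-allele frequency, starting from a small release p0."""
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_frequency(freqs[-1], conversion))
    return freqs


if __name__ == "__main__":
    drive = spread(conversion=1.0)       # idealized, perfectly efficient drive
    mendelian = spread(conversion=0.0)   # the same edit without a drive
    for gen, (d, m) in enumerate(zip(drive, mendelian)):
        print(f"gen {gen:2d}: drive {d:.3f}   ordinary inheritance {m:.3f}")
```

In this toy model, a perfectly efficient drive allele roughly doubles in frequency each generation until it saturates, while the same edit under ordinary inheritance simply stays at the frequency at which it was released.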

No prose can do justice to the sheer elation of discovery, though Shelley makes a worthy attempt. As Victor Frankenstein relates, “I trod heaven in my thoughts, now exulting in my powers, now burning with the idea of their effects.” Such was my experience when I realized the implications of merging CRISPR with gene drive to create a tool that could help eradicate diseases, save endangered species, and obviate the need for pesticides—a way to solve ecological problems with biology, not bulldozers.

But the thrill of invention can amount to a siren song. As Robert Oppenheimer famously said, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.”

It need not be so.

Most technologies require substantial resources to deploy and impact the world—think of the steam engine, electricity, and vaccines. Those few that do not, such as software, still typically require voluntary adoption by many other people. These constraints normally give individuals an opportunity to selectively opt out, as do the Amish, and more broadly provide society with a chance to consider the ramifications.

With CRISPR-based gene drive, however, anyone with the right training could conceivably alter whole ecosystems unless their creation is actively countered and overwritten. In the worst-case scenario, an unopposed “global” gene drive system could spread through every population of the target species in the world, potentially affecting countless people without their consent. While most genetic changes would have no ecological effects whatsoever, we can’t know for sure without testing them in small areas (i.e., without a global gene drive), and individuals acting on their own won’t have run such tests. Imagine if someone in, say, New Zealand—even a would-be do-gooder—released a “global” gene drive designed to suppress or remove an invasive rat population by spreading infertility. Even if it worked well, possibly saving many endangered species, the construct wouldn’t stay in that area. It would spread (or be spread) by ship or plane to Eurasia and likely collapse the native rat populations there, with unknown ecological consequences. (This is a major reason why gene drive researchers have emphasized laboratory safeguards, and also why my group is working to develop daisy drive, a form of CRISPR-based drive system that runs out of genetic fuel and stops.)

Thankfully, few people have the necessary skills and knowledge to insert genes into organisms that reproduce sexually, and the vast majority of them work with laboratory fruit flies—hardly a keystone species. After three years of evaluating potential dangers, my current best assessment is that gene drive is unlikely to present much of a biosecurity threat, or even a major ecological hazard. Simply put, drive systems spread slowly, can be unfailingly and cheaply detected, and are easily overwritten and therefore countered.
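To put rough numbers on “spread slowly,” here is a back-of-the-envelope sketch using the same idealized update rule as the earlier model (perfect conversion, random mating, no fitness cost); the release fractions are hypothetical, chosen only to illustrate scale.

```python
# Back-of-the-envelope estimate of how long even an idealized drive needs to
# spread, using the same toy update rule (perfect conversion, random mating,
# no fitness cost).  The release fractions below are hypothetical.

def generations_to_majority(release_fraction, conversion=1.0):
    """Generations until the drive allele exceeds 50% of all alleles."""
    p, generations = release_fraction, 0
    while p < 0.5:
        heterozygotes = 2 * p * (1 - p)
        p = p * p + heterozygotes * (1 + conversion) / 2
        generations += 1
    return generations


if __name__ == "__main__":
    for fraction in (1e-6, 1e-4, 1e-2):
        g = generations_to_majority(fraction)
        print(f"release = {fraction:.0e} of the population -> ~{g} generations to majority")
```

Even in this best case for the drive, a small release takes a dozen or more generations to reach a majority of the population, which for most wild species means years of real time in which the altered sequence could be detected and, if necessary, overwritten.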

But it would be sheerest hubris to assume those are the only concerns. If anyone unilaterally set in motion a process that might alter an entire wild species, even if it didn’t work well or had no measurable effects, the consequences for public trust in scientists and governance could be devastating—perhaps enough to cripple research that our civilization desperately needs.

It’s important to note, however, that there is a major difference between gene drive and the fictional tale of Mary Shelley: Victor Frankenstein did not ask anyone for advice.

Many of my colleagues and mentors—particularly George Church, Kenneth Oye, Jeantine Lunshof, and James P. Collins—have worked with me and others to examine gene drives and their potential consequences. Along with numerous other experts from diverse fields, including representatives from environmental organizations, we discussed the implications, risks, and benefits, ultimately concluding that it was not only safe to tell the world about our discovery, but ethically necessary to do so before anyone tested it in the laboratory.

Had it been up to my judgment alone, things might have gone as badly as in fiction.

Even fiction could have been worse. If Victor Frankenstein had created a fertile mate for his creature, it would surely have represented an existential threat to humanity—likely the first such technological example in literature. Had he shared the secret with others, someone else would surely have done something equivalent. (When asked about “the particulars of his creature’s formation,” Victor’s response is, “Are you mad, my friend?”) Yet with that same technology, humanity might also have abolished disease and aging, hunger and want, perhaps rendering us invulnerable to that type of existential risk. Evidently the fictional inventor didn’t think of that. And even if he had, his lone evaluation of the risks and benefits would not be nearly as accurate as if he had consulted with a diverse group.

Technological hubris is ignoring the suggestions of others—even if only by neglecting to inform them of an advance. It is most common among those suffering from the curse of knowledge: scientists.

That’s why my colleagues and I seek to ensure that all gene drive research takes place in the open light of day. People deserve a voice in decisions that might affect them, and building gene drive systems behind closed doors denies them that opportunity. Even apart from the moral hazard, keeping research plans secret—as the current scientific enterprise incentivizes us to do—is appallingly inefficient and outright dangerous. It doesn’t just slow the rate of advances, thereby jeopardizing our ability to sustain our civilization; it practically invites global catastrophic risk. No one, be they science-fiction author or Austin Burt himself, anticipated a form of gene drive as versatile as the one theoretically enabled by CRISPR. What else have we not anticipated that this time might be truly dangerous? And given this possibility, why on earth do we send out small teams of ultra-specialists, mostly working on their own and in secret, to find and open every technological box they can? Better to default to open research plans, enabling diverse teams to evaluate new advances and implementing measures to obscure and counter anything deemed truly dangerous, than to proceed blindly.

Of course, any wholesale restructuring of the scientific enterprise would also be an act of reckless hubris. My personal rule of ecological engineering: start local and scale up only if warranted. In this case, the best “local test” is the field of gene drive research. Scientific journals, funders, policymakers, and intellectual property holders should change the incentives to ensure that all proposed gene drive experiments are open and responsive.

The message from fiction and reality is clear: Scientists should hold themselves morally responsible for all consequences of their work. The least we can do is muster enough humility to ask for help.

*Update, Jan. 26, 2017: This piece was updated to include a picture of an Anopheles mosquito instead of an Aedes aegypti mosquito. The Anopheles mosquito can transmit malaria.

This article is part of the Frankenstein installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month, we’ll choose a new technology and break it down. Future Tense is a collaboration among Arizona State University, New America, and Slate.