What A.I. Researchers Can Learn From Frankenstein

The citizen’s guide to the future.
Jan. 23, 2017, 7:06 AM

Dr. Frankenstein’s Three Big Mistakes

What artificial intelligence researchers can learn from Frankenstein.

Elon Musk with Frankenstein. Photo illustration by Slate. Images by Hector Guerrero/Getty Images, The Man in Question/Wikimedia Commons, and EnginKorkmaz/Thinkstock.


In 2015, Elon Musk announced the creation of OpenAI, a nonprofit research company he says is intended “to build safe AI, and ensure that AI’s benefits are as widely and evenly distributed as possible.” As the name suggests, the founding principle of OpenAI is essentially democratic: to make the findings of artificial intelligence researchers available to all.

Detractors believe it is unwise to allow open access to A.I. research. What will happen if this research gets into the hands of a “Dr. Evil,” a tyrant, a fanatic, or a lunatic? Others worry that A.I. will surpass its human handlers and “turn itself loose on the world.”


But both A.I. researchers and those who worry about A.I. should look for guidance not to The Terminator but to another classic work of science fiction: Mary Shelley’s novel Frankenstein. The story of a scientist’s ill-fated invention of a self-directing, artificial human being demonstrates that the best protection against an evil scientist is a good scientist, and that the best way to solve problems is to invite the advice of other researchers. In Shelley’s tale, Victor Frankenstein, the brilliant but shortsighted scientist, made three key mistakes that could easily have been prevented by an organization like OpenAI.

1. Isolation: One of Frankenstein’s gravest errors was keeping his research secret. He worked alone, hiding his progress from his teacher and his fellow scientists. Thus, when his creature went on a murderous rampage, killing those closest to him, there was no one to help Frankenstein destroy it or, at the very least, modify its behavior. When crisis struck, there was no one to whom Frankenstein could turn for guidance. And when Frankenstein died, his creature continued to roam the earth, enraged and embittered, poised to wreak more damage. Had Frankenstein belonged to a research group, his fellow scientists could have stepped in to help control the creature and to support him through the challenges that emerged the moment it attained autonomy. As it was, Frankenstein failed to manage his invention and succumbed to the perils of the isolated researcher. He died of exhaustion and despair, a tragedy that could have been prevented by a group like OpenAI, which encourages scientists not only to share their findings but to draw support from one another.

2. Neglecting his creation: When Frankenstein first beheld his creation, he was overwhelmed with remorse and disgust. He fled from its presence, giving up the opportunity to supervise, nurture, and educate his invention. In today’s terms, these practices are known as “reinforcement learning” and “scalable oversight,” but in essence, they add up to the same scientific principle: The inventor must carefully observe, train, and oversee his or her invention. This should have been particularly important for Frankenstein, who had designed his creature to be as human as possible, with the supreme objective of finding love and companionship. When the creature woke to his new life and found himself alone, he experienced this as a crushing blow, a nearly fatal rejection on the part of his “father,” and immediately set forth to find Frankenstein.
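The training the creature never received can be sketched in code. Below is a toy illustration, entirely my own and not from the article, of reinforcement learning in its simplest form: a supervisor rewards or discourages an agent’s actions, and the agent’s preferences shift toward the behavior that was reinforced. The action names, reward values, and learning rate are all invented for the example.

```python
import random

random.seed(0)

actions = ["be_gentle", "lash_out"]
values = {a: 0.0 for a in actions}   # the agent's learned value estimates
alpha = 0.5                          # learning rate

def trainer_feedback(action):
    """The supervisor's reward signal: +1 for gentleness, -1 for violence."""
    return 1.0 if action == "be_gentle" else -1.0

for _ in range(100):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the reward the supervisor actually gave.
    values[action] += alpha * (trainer_feedback(action) - values[action])

# After training, the supervised behavior dominates.
assert values["be_gentle"] > values["lash_out"]
```

The point of the sketch is the feedback loop itself: without a trainer present to supply `trainer_feedback`, the values never update, which is roughly the position Frankenstein left his creature in.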

As Frankenstein regarded love as a purely positive goal, beneficial for both human beings and his creature, it did not occur to him to include a fail-safe mechanism, such as a shut-off switch. Rather, he was proud that he had designed his creation to be a free-standing and self-propelled organism. Nor did he properly consider what today’s scientists would term his creature’s reward function: the objective it was built to pursue, no matter the side effects. A robot whose reward function is to move an object from point A to point B, for example, will often break or destroy whatever lies in its way unless it has been explicitly penalized for doing so. Along the same lines, the creature’s single-minded pursuit of love meant the ruthless destruction of all that stood in the way of loving and being loved, including swift and bloody reprisals against those repulsed by him. When innocent villagers, terrified by his appearance, took action to defend themselves, he murdered them. He burned down a family’s house because they rejected his advances.
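The point-A-to-point-B example can be made concrete. Here is a minimal sketch, my own illustration rather than anything from the article, of a misspecified reward function: an agent scored only on reaching the goal prefers the destructive shortcut, while pricing in side effects flips its preference. All names and numbers are invented for the example.

```python
def reward(plan, side_effect_penalty=0.0):
    """Score a plan: +10 for reaching the goal, minus a small cost per step,
    minus an optional penalty for each object broken along the way."""
    r = 10.0 if plan["reaches_goal"] else 0.0
    r -= 0.1 * plan["steps"]
    r -= side_effect_penalty * plan["objects_broken"]
    return r

# Two candidate plans: the short path smashes a vase; the detour does not.
smash = {"reaches_goal": True, "steps": 4, "objects_broken": 1}
detour = {"reaches_goal": True, "steps": 8, "objects_broken": 0}

# With a reward defined only over the goal, the destructive plan wins...
assert reward(smash) > reward(detour)
# ...but once side effects carry a cost, the careful detour wins instead.
assert reward(smash, side_effect_penalty=5.0) < reward(detour, side_effect_penalty=5.0)
```

The creature’s single-minded pursuit of love is the first case: a reward function with no term for collateral damage.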


3. Poor preparation of society and inadequate funding: Society reacted to Frankenstein’s creation with fear and hatred, an environmental obstacle that prevented the creature from achieving its goal of love, with tragic results for all involved. For lack of funding, Frankenstein had to rely on substandard materials in the manufacture of his creature. Under the cover of darkness, he dug up graves and stole body parts from corpses. Unable to find a single dead body that had not at least partially decayed, he had to combine parts from different corpses. As a result, the creature’s limbs did not match, its arms and legs were in different stages of decomposition, and it gave off a strong odor of decay. The effect was not a pleasing aesthetic.

However, perhaps the central reason Frankenstein’s creature was greeted with such antipathy was that he too closely resembled a human being, an engineering feat with unhappy consequences. Today, we would call this the uncanny valley effect: the unease provoked by an artificial agent that looks almost, but not quite, human. Zombies and androids are regarded as menacing, while R2-D2 and C-3PO are seen as adorable, precisely because of a central contradiction: They do not look human, and yet they are all too human in their frailty and amusing idiosyncrasies.

Frankenstein’s creature, on the other hand, was ugly, smelly, and frighteningly strong. His head was somewhat square, thanks to the poor quality of the skull the inventor was forced to use. This was a sad outcome for the creature, who on many occasions wept over the profound revulsion he inspired in others. The creature begged Frankenstein to create a companion like himself, threatening the destruction of the human race if he refused. Without anyone to consult for guidance, Frankenstein began construction, but before he completed the second creature, he destroyed her out of fear that she would reproduce with the first and annihilate humanity. In response, the creature murdered Frankenstein’s own bride, and creation and creator became locked in a struggle that ultimately cost Frankenstein his life.

Which brings us back to artificial intelligence and, more specifically, OpenAI. Such a highly funded endeavor possesses the resources to help scientists design their creations with optimal materials and design features. It encourages researchers to work together, not in competition or in isolation, so they might advise one another. And when something goes wrong, a brain trust can work together to solve it.

Despite her limited scientific expertise, Mary Shelley anticipated many of the challenges A.I. faces today. Few engineers can fully understand or predict the behavior of their creations. Microsoft, for instance, designed Tay.ai, a chatbot meant “to entertain and engage” 18- to 24-year-olds. But in less than 24 hours, hostile users had turned Tay into a troll spouting racist, misogynistic, and anti-Semitic remarks, an experiment in A.I. gone awry. Artificial intelligence isn’t likely to kill us all, but the more people who work on the problem, the lower those odds become. Frankenstein’s creature did not have to be a blight on society. He devolved into a monster of revenge because his creator abandoned him.

This article is part of the Frankenstein installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month, we’ll choose a new technology and break it down. Future Tense is a collaboration among Arizona State University, New America, and Slate.