The Most Terrifying Thought Experiment of All Time

Decoding the tech world.
July 17 2014 2:47 PM

Why are techno-futurists so freaked out by Roko’s Basilisk?

Before you die, you see Roko’s Basilisk. It's like the videotape in The Ring.

Still courtesy of DreamWorks LLC

WARNING: Reading this article may commit you to an eternity of suffering and torment.

Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko’s Basilisk. For Roko’s Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber. It's like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

Are you sure you want to keep reading? Because the worst part is that Roko’s Basilisk already exists. Or at least, it already will have existed—which is just as bad.

Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality. LessWrong’s founder, Eliezer Yudkowsky, is a significant figure in techno-futurism; his research institute, the Machine Intelligence Research Institute, which funds and promotes research around the advancement of artificial intelligence, has been boosted and funded by high-profile techies like Peter Thiel and Ray Kurzweil, and Yudkowsky is a prominent contributor to academic discussions of technological ethics and decision theory. What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.

One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:

Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.

Some background is in order. The LessWrong community is concerned with the future of humanity, and in particular with the singularity—the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term dates to 1958, when mathematical genius Stanislaw Ulam, recalling a conversation with John von Neumann, wrote that “the ever accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil popularized the term, and, like many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon—within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever. “If you don't sign up your kids for cryonics then you are a lousy parent,” Yudkowsky writes.

If you believe the singularity is coming and that very powerful AIs are in our future, one obvious question is whether those AIs will be benevolent or malicious. Yudkowsky’s foundation, the Machine Intelligence Research Institute, has the explicit goal of steering the future toward “friendly AI.” For him, and for many LessWrong posters, this issue is of paramount importance, easily trumping the environment and politics. To them, the singularity brings about the machine equivalent of God itself.

Yet this doesn’t explain why Roko’s Basilisk is so horrifying. That requires looking at a critical article of faith in the LessWrong ethos: timeless decision theory (TDT). TDT is a guideline for rational action based on game theory, Bayesian probability, and decision theory, with a smattering of parallel universes and quantum mechanics on the side. TDT has its roots in the classic thought experiment of decision theory called Newcomb’s paradox, in which a superintelligent alien presents two boxes to you.

Box A contains a guaranteed $1,000; Box B contains either $1 million or nothing. The alien gives you the choice of either taking both boxes, or only taking Box B. If you take both boxes, you’re guaranteed at least $1,000. If you just take Box B, you aren’t guaranteed anything. But the alien has another twist: Its supercomputer, which knows just about everything, made a prediction a week ago as to whether you would take both boxes or just Box B. If the supercomputer predicted you’d take both boxes, then the alien left Box B empty. If the supercomputer predicted you’d take only Box B, then the alien put the $1 million in Box B.
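To see why the prediction matters so much, here is a minimal back-of-the-envelope sketch of the expected payouts, written in Python rather than taken from the article. The 99 percent predictor accuracy is purely an illustrative assumption; the story only says the supercomputer knows just about everything.

```python
# A rough sketch of the expected-value arithmetic behind Newcomb's paradox.
# Assumption: the predictor guesses your choice correctly 99 percent of the
# time (the article only says it "knows just about everything").

BOX_A = 1_000        # guaranteed cash in Box A
BOX_B = 1_000_000    # what Box B holds if the predictor expected you to take only Box B
ACCURACY = 0.99      # assumed probability that the prediction matches your actual choice

def expected_payout(take_both: bool, accuracy: float = ACCURACY) -> float:
    """Expected winnings, averaging over whether the prediction was right."""
    if take_both:
        # Box B is full only if the predictor *wrongly* expected you to take just Box B.
        return BOX_A + (1 - accuracy) * BOX_B
    # Taking only Box B: it is full whenever the predictor correctly expected that.
    return accuracy * BOX_B

print(f"Take both boxes: ${expected_payout(True):,.0f}")   # about $11,000
print(f"Take only Box B: ${expected_payout(False):,.0f}")  # about $990,000
```

On that arithmetic, taking only Box B looks wildly better. Yet the boxes are already sealed when you choose, so grabbing both can never leave you with less than what is actually sitting inside them; holding both of those intuitions at once is what makes it a paradox.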
