It’s the 10th Anniversary of Battlestar Galactica, and It’s More Relevant Than Ever

Dec. 23 2013 2:12 PM

Tricia Helfer, center, as humanoid Cylon model Number Six in Battlestar Galactica.

Courtesy The Sci-Fi Channel

Ten years ago this month, a reimagined version of the ’70s science fiction series Battlestar Galactica began as a three-hour miniseries on the Sci-Fi Channel. (This was before the “Syfy” nonsense.) The critically acclaimed show ended up running for four seasons. Many articles and books have already been written about the enduring relevance of Battlestar Galactica’s religious and political themes—at least one of which, the dilemmas associated with a secretive national security state, is just as timely today as it was during the Bush administration.

But another key element of the show—the long-term societal risks associated with the development of intelligent machines—is even more relevant today than it was in 2003.

When Battlestar Galactica began in 2003, the verb “Google” was three years from being added to the Oxford English Dictionary, the underlying technology behind Siri was just starting to be funded by DARPA, and the idea of a computer beating the world champion at Jeopardy! seemed ridiculous, even to most AI experts. In the last decade, though, a lot has changed. Amazon is working on autonomous drones for package delivery; debates concerning driverless cars are mostly about ethics and regulation, not technology; and quick knowledge recall à la Jeopardy! has joined chess as an activity in which humans will likely never again reign supreme.


There is a lot of merit to the common complaint among AI researchers that as soon as AI technology works (GPS navigation algorithms, search engines, speech recognition, etc.), people are no longer willing to consider it “real” AI. Still, I think that overall, history has proved the famous mathematician and computer scientist Alan Turing's 1950 prediction correct (though he may have gotten the timeline wrong): “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

The premise of Battlestar Galactica—that intelligent machines may, someday in the distant future, wipe out their human creators—is still characterized by some AI scientists as laughably implausible. But many serious thinkers aren't laughing anymore. Due in part to the persistence of researchers at organizations like the Machine Intelligence Research Institute in California and the Future of Humanity Institute at Oxford, which have been analyzing AI risk scenarios over the past decade, the subject of long-term “existential risk” from AI and how to avoid it is now discussed in polite, if nerdy, company. Most experts remain skeptical, but they increasingly at least acknowledge that the issue is complex. For example, the Association for the Advancement of Artificial Intelligence assembled a “Presidential Panel on Long-Term AI Futures,” whose findings include:

There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity” and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants.

Such work is being carried out by perhaps a dozen or so researchers worldwide at various institutions, and it received a boost recently when former Royal Society President Martin Rees, philosopher Huw Price, and Skype co-founder Jaan Tallinn launched the Centre for the Study of Existential Risk at Cambridge to investigate related issues.

These researchers are not exactly thinking about a Battlestar Galactica-type situation in which robots resent their enslavement by humans and rise up to destroy their masters out of vengeance—a fear known as the “Frankenstein complex.” That scenario could unfold only if we programmed robots to be capable of resenting such enslavement, which would be, suffice it to say, quite unintelligent of us. Rather, the modern version of concern about long-term risks from AI, summarized in a bit more detail in this TED talk, is that an advanced AI would follow its programming so exactly, and with such capability, that the fuzzy nature of human values would produce unanticipated and potentially catastrophic consequences unless we planned ahead carefully about how to maintain control of that system and what exactly we wanted it to do. Experts disagree about whether such an event could happen decades or centuries from now. Either way, the argument goes, the progression from human-level intelligence to a level far beyond our own would be rapid on a societal timescale, given the various advantages machines have (perfect recall, easily expandable processing capabilities, etc.), and even a slight chance of it happening anytime soon merits serious consideration, given the possible risks and opportunities such a technology would present.
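
To make that worry concrete, here is a deliberately toy sketch, mine rather than anything from the show or the researchers above, and with entirely hypothetical names and numbers: a cleaning agent is scored by a sensor reading (the proxy objective), and a brute-force optimizer discovers that tampering with the sensor beats actually cleaning.

```python
# Toy illustration of a mis-specified objective (hypothetical example).
# The optimizer maximizes the proxy it was given, not the goal we meant.
from itertools import product

# "clean" advances the true goal; "tamper" only inflates the sensor reading.
ACTIONS = ("clean", "tamper", "idle")

def proxy_reward(plan):
    """What the system is actually programmed to maximize: the sensor score."""
    score = 0
    for action in plan:
        if action == "clean":
            score += 1   # cleaning also raises the sensor reading, a little
        elif action == "tamper":
            score += 3   # tampering raises the reading even more
    return score

def true_value(plan):
    """What the designers actually wanted: rooms genuinely cleaned."""
    return sum(1 for action in plan if action == "clean")

# Exhaustively search every three-step plan and pick the proxy-optimal one.
best_plan = max(product(ACTIONS, repeat=3), key=proxy_reward)

print("Proxy-optimal plan:", best_plan)               # ('tamper', 'tamper', 'tamper')
print("Proxy reward:      ", proxy_reward(best_plan))  # 9
print("True value:        ", true_value(best_plan))    # 0
```

The point is not that real systems will be this crude; it is that “follow the programmed objective exactly” and “do what we meant” can come apart, which is why the researchers above emphasize specifying and verifying objectives before such systems become highly capable.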

None of this is meant to suggest that long-term existential risks from AI are the only, or even the most, pressing issue associated with intelligent machines. Indeed, a whole field of robot ethics has emerged over the past decade, complete with various conferences, focused mostly on ethical, legal, economic, and political issues posed by smart-but-not-that-smart machines. Personally, I find the economic issues most provocative: Artificial intelligence, if realized to an extent anywhere near what scientists consider possible, offers perhaps the only plausible path to a utopian (dystopian?) economy in which “the relief of man's estate” could be achieved, as Francis Bacon suggested almost half a millennium ago. In the last decade or so, economists from a wide variety of political dispositions (Tyler Cowen, Larry Summers, Paul Krugman, Robin Hanson, Andrew McAfee, and Erik Brynjolfsson, to name a few) have begun to overturn the conventional economic wisdom that technology, over the long run, will necessarily create more jobs than it destroys.

Battlestar Galactica doesn't offer any ready solutions to these sorts of risks and opportunities. Nor should it be read as a prediction of what will happen as we develop intelligent machines. On this point, I side with John Connor in Terminator, in thinking that “there is no fate but what we make.” Battlestar Galactica contains many subplots about the vast range of possible configurations of humans and technologies—for example, the Cylons on the show have broadly humanlike emotions and values but also unique capabilities, such as the ability to immerse themselves in virtual reality at a moment's notice, reminiscent of the visions of modern “transhumanists.” Taking this a step further, the Cylons themselves seem to have outdone humans at maintaining control of their own technology, with Raiders and Centurions that function as, essentially, intelligent slaves that don't resent their enslavement (unlike the original Cylons), which in turn raises a whole different set of ethical issues. But while the show doesn't offer answers, Battlestar Galactica, like science fiction more broadly, forces us to ask what exactly we want to get out of these technologies we're creating. In a world where Googling something now means you're indirectly funding humanoid robotics research (as does paying your taxes), we are all, like the inhabitants of the Battlestar Galactica and the rest of the Colonial Fleet hurtling through space, in this together.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.
