Future Tense

It’s the 10th Anniversary of Battlestar Galactica

And it’s more relevant than ever.

Tricia Helfer, center, as humanoid Cylon model Number Six in Battlestar Galactica.

Courtesy The Sci-Fi Channel

Ten years ago this month, a reimagined version of the ’70s science fiction series Battlestar Galactica began as a three-hour miniseries on the Sci-Fi Channel. (This was before the “Syfy” nonsense.) The critically acclaimed show ended up running for four seasons. Many articles and books have already been written about the enduring relevance of Battlestar Galactica’s religious and political themes—at least one of which, the dilemmas associated with a secretive national security state, is just as timely today as it was during the Bush administration.

But another key element of the show—the long-term societal risks associated with the development of intelligent machines—is even more relevant today than it was in 2003.

When Battlestar Galactica began in 2003, the verb “Google” was three years from being added to the Oxford English Dictionary, the underlying technology behind Siri was just starting to be funded by DARPA, and the idea of a computer beating the world champion at Jeopardy! seemed ridiculous, even to most AI experts. In the last decade, though, a lot has changed. Amazon is working on autonomous drones for package delivery; debates concerning driverless cars are mostly about ethics and regulation, not technology; and quick knowledge recall à la Jeopardy! has joined chess as an activity in which humans will likely never again reign supreme.

There is a lot of merit to the common complaint among AI researchers that as soon as AI technology works (GPS navigation algorithms, search engines, speech recognition, etc.), people are no longer willing to consider it “real” AI. Still, I think that overall, history has proved the famous mathematician and computer scientist Alan Turing’s 1950 prediction correct (though he may have gotten the timeline wrong): “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

The premise of Battlestar Galactica—that intelligent machines may, someday in the distant future, wipe out their human creators—is still characterized by some AI scientists as laughably implausible. But many serious thinkers aren’t laughing anymore. Due in part to the persistence of researchers at organizations like the Machine Intelligence Research Institute in California and the Future of Humanity Institute at Oxford, which have been analyzing AI risk scenarios over the past decade, the subject of long-term “existential risk” from AI and how to avoid it is now discussed in polite, if nerdy, company. Most experts remain skeptical, but they increasingly at least acknowledge that the issue is complex. For example, the Association for the Advancement of Artificial Intelligence assembled a “Presidential Panel on Long-Term AI Futures,” whose findings include:

There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity” and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants.

Such work continues among perhaps a dozen researchers worldwide at various institutions, and it recently received a boost when former President of the Royal Society Martin Rees, philosopher Huw Price, and Skype co-founder Jaan Tallinn launched the Centre for the Study of Existential Risk at Cambridge to investigate related issues.

These researchers are not exactly thinking about a Battlestar Galactica-type situation in which robots resent their enslavement by humans and rise up to destroy their masters out of vengeance—a fear known as the “Frankenstein complex.” That scenario could arise only if we programmed robots to be capable of resenting their enslavement, which would be, suffice it to say, quite unintelligent of us. Rather, the modern version of concern about long-term risks from AI, summarized in a bit more detail in this TED talk, is that an advanced AI would follow its programming so exactly and with such capability that, because of the fuzzy nature of human values, unanticipated and potentially catastrophic consequences would result unless we planned ahead carefully about how to maintain control of such a system and what exactly we wanted it to do. Experts disagree about whether such an event might happen decades or centuries from now. But, the argument goes, either way the progression from human-level intelligence to a level far beyond our own would be rapid on a societal timescale due to the various advantages machines have (perfect recall, easily expandable processing capabilities, etc.), and even a slight chance of it happening anytime soon merits serious consideration given the possible risks and opportunities such a technology would present.

None of this is meant to suggest that long-term existential risks from AI are the only, or even the most, pressing issue associated with intelligent machines. Indeed, a whole field of robot ethics has emerged over the past decade, complete with various conferences, focused mostly on ethical, legal, economic, and political issues posed by smart-but-not-that-smart machines. Personally, I find the economic issues most provocative: Artificial intelligence, if realized to an extent anywhere near what scientists consider possible, offers perhaps the only plausible path to a utopian (dystopian?) economy in which “the relief of man’s estate” could be achieved, as Francis Bacon suggested almost half a millennium ago. In the last decade or so, economists from a wide variety of political dispositions (Tyler Cowen, Larry Summers, Paul Krugman, Robin Hanson, Andrew McAfee, and Erik Brynjolfsson, to name a few) have begun to overturn the conventional economic wisdom that technology, over the long run, will necessarily create more jobs than it destroys.

Battlestar Galactica doesn’t offer any ready solutions to these sorts of risks and opportunities. Nor should it be read as a prediction of what will happen as we develop intelligent machines. On this point, I side with John Connor of the Terminator films in thinking that “there is no fate but what we make.” Battlestar Galactica contains many subplots about the vast range of possible configurations of humans and technologies—for example, the Cylons on the show have broadly humanlike emotions and values, but also have unique capabilities, such as the ability to immerse themselves in virtual reality at a moment’s notice, reminiscent of the visions of modern “transhumanists.” Taking this a step further, the Cylons themselves seem to have outdone humans in maintaining control of their own technology, with Raiders and Centurions that function, essentially, as intelligent slaves that don’t resent their enslavement (unlike the original Cylons), which in turn raises a whole different set of ethical issues.

But while the show doesn’t offer answers, Battlestar Galactica, and science fiction more broadly, forces us to ask what exactly we want to get out of these technologies we’re creating. In a world where Googling something now means you’re indirectly funding humanoid robotics research (as does paying your taxes), we are, like the inhabitants of the Battlestar Galactica and the rest of the Colonial Fleet hurtling through space, all in this together.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.