Do We Need an “Artificial Intelligence Nanny”?

Future Tense
The Citizen's Guide to the Future
Aug. 17 2011 12:21 PM

Over on h+ magazine, a publication dedicated to far-term technology, Ben Goertzel proposes the idea of an “artificial intelligence nanny”: “a powerful yet limited AGI (Artificial General Intelligence) system, with the explicit goal of keeping things on the planet under control while we figure out the hard problem of how to create a probably positive Singularity.” (The Singularity, for those not fluent in futurism, refers to the creation of some beyond-human intelligence.) Goertzel wants us—or rather, the Singularity crowd—to consider inventing a protector to keep humans from destroying ourselves with synthetic biology, nanotech, or some malevolent artificial intelligence; he also thinks that this AI nanny could save us from an evil super-team of terrorists and tech geniuses bent on destruction. He admits that creating an AI nanny would be technologically challenging—at the moment, impossible—but says that the tools to build it would emerge in tandem with the dangerous innovations that would require such a benevolent baby-sitter.

When you put it that way, doesn’t the future seem terrifying?

Read more on h+.

Future Tense is a partnership of Slate, New America, and Arizona State University.

Torie Bosch is the editor of Future Tense, a project of Slate, New America, and Arizona State that looks at the implications of new technologies.