Future Tense

What If Jason Bourne Were a Cyberweapon?

We may soon have to make room in our lives for self-reliant, highly autonomous software.

[Photo: closeup of a bee. Thanks to computer-chip advancements, one day this bee may spy on you. Thinkstock.]

In the summer of 2010, Iran’s uranium-enrichment centrifuges started to malfunction. Eventually, it came to light that the cause was a piece of computer malware called Stuxnet. This little computer bug—small enough to hide on a thumb drive among PowerPoint presentations and photographs of the kids—managed to wend its way around the world, to the cloistered confines of Iran’s nuclear facilities, and into machines that were “air gapped,” as they say in the business—isolated from any other computer network. Getting into one of these machines requires more than a garden-variety computer virus; it requires a virus built for maximum effectiveness and autonomy—the Jason Bourne of viruses.

How long will it be before the advanced programming techniques that went into Stuxnet make their way into cyberweapons that boomerang against us? If anybody knows the answer to that question, they aren’t telling, but it seems a near certainty that, sooner or later, advanced malware will be headed in our direction. At the moment, the United States is highly vulnerable to an attack from a Stuxnet-like virus, and some security experts think such an attack could cause as much economic and humanitarian damage as an attack with nuclear weapons.

The advent of intelligent rogue computer programs such as Stuxnet is only one of the many ways the field formerly known as artificial intelligence is making its way slowly and inexorably into every aspect of life. This is what happens with technology. It starts out as something for an elite corps of supernerds and gradually works its way to the masses, getting cheaper and more powerful.

Artificial intelligence started out decades ago with the promise of general-purpose machines that could think and act like humans. Those hopes were dashed, in part because the goal was too ambitious—human intelligence is just too subtle, too sophisticated, and too poorly understood to capture in a machine. The effort failed, too, in part because the hardware was too crude—computers in the ’60s, ’70s, and ’80s were big but not powerful. Now they’re tiny and quite powerful, and getting more so every year.

In the meantime, computer scientists have taken a divide-and-conquer approach to the problem of artificial intelligence. They’ve broken it up into bits and attacked each one separately. This has led to something of a renaissance in the field in the past decade or so. Progress in AI is proceeding in narrow slices of intelligence—speech recognition, text reading, computer vision. The pieces come together in robots, which have sensors to take in what’s going on in the real world, and the ability to move about and to effect physical change on it. Increasingly, robots interact with people in their daily lives.

The notion of humanoid robots taking over the world is probably silly—certainly when you think of robots in the literal sense, as mechanical creatures with arms and legs that walk around in the streets and sit at a desk in the office cubicle next to yours, competing with you for a promotion. But it becomes less outlandish when you abandon the literal notion of robots as humanoids. In the world we’re now creating, you can think of robots as any artificial intelligence that connects somehow with the physical world. In this respect, Stuxnet was a kind of robot; instead of affecting the physical world through its arms and legs, it did so through the uranium centrifuges of Iran’s nuclear program. A robot is a general-purpose tool made up of different components of narrowly built artificial intelligences.

The first concern that engineers express about new technologies is inevitably privacy, and machine intelligence is no different. Take your iPhone. It is, basically, a computer, and it carries an awful lot of information about you. It’s got a camera, a microphone, and a GPS receiver that gives your location. The kind of information it collects reveals a great deal about you and your habits. And the degree to which this information is collected and made available is only going to increase. Many policymakers and computer experts are thinking up ways of using the kind of data that cellphones collect to improve such things as traffic control and public health. If you’re home with the flu, for instance, health officials could use your cellphone data to figure out who got within three feet of you in the past few days, when you were at the peak of contagiousness, and use that information to help contain the spread of infection, perhaps by contacting those people and warning them that they may be getting sick and unwittingly spreading the infection themselves.
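As a purely illustrative sketch of how that flu scenario might work computationally: suppose each phone reports timestamped GPS fixes, and health officials want the IDs of everyone who came within three feet of the patient. Every name and threshold below is hypothetical, and in practice GPS is far too coarse to resolve three feet (a real system would likely rely on something like Bluetooth proximity), but the shape of the computation would look roughly like this:

```python
# Minimal sketch of proximity-based contact tracing, assuming each phone
# reports fixes as (person_id, unix_time, lat, lon). All names and
# thresholds are illustrative, not drawn from any real system.
import math

FEET_PER_METER = 3.28084
CONTACT_FEET = 3.0        # "within three feet," per the scenario above
CONTACT_SECONDS = 300     # fixes within 5 minutes count as "the same time"

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in feet."""
    r_meters = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_meters * math.asin(math.sqrt(a)) * FEET_PER_METER

def likely_contacts(patient_fixes, other_fixes):
    """Return IDs of people whose fixes put them near the patient."""
    contacts = set()
    for _, t1, lat1, lon1 in patient_fixes:
        for pid, t2, lat2, lon2 in other_fixes:
            if abs(t1 - t2) <= CONTACT_SECONDS and \
               haversine_feet(lat1, lon1, lat2, lon2) <= CONTACT_FEET:
                contacts.add(pid)
    return contacts
```

The point of the sketch is how little machinery is needed: once the location data exists and is centralized, finding everyone who crossed paths with you is a simple distance calculation.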

Having your phone provide such information to, say, the CDC may offend your sense of privacy, or perhaps you think it’s worth it for the common good. Regardless, imagine what would happen if a computer virus spread by organized crime infected your phone and began to turn its information-gathering capabilities to nefarious ends.

A sophisticated virus in your cellphone might be able to listen in on all your conversations. It would know your credit card numbers, and it would intercept all your emails. Microsoft, Google, and other firms have already developed software that prioritizes email messages by what you’re most likely to be interested in. They can do “sentiment analysis” that scans email messages and figures out how you feel about certain things—whether you think Obama is doing a good job, and so forth. The software can read blogs and automatically tag people as leaning to the right or the left on the political spectrum. Such software could gather information the way Gallup polls do, but without having to ask people what they think about certain subjects; it could tell just by analyzing their emails and blog posts.
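To make concrete what “sentiment analysis” means at its very simplest, here is a toy lexicon-based scorer. The word lists are illustrative stand-ins for a real sentiment lexicon, and the real systems at companies like Microsoft and Google use trained statistical classifiers rather than hand-picked word lists; this just shows the basic idea of turning text into a polarity score:

```python
# Toy lexicon-based sentiment scoring. The word lists below are
# illustrative stand-ins, not a real sentiment lexicon.
POSITIVE = {"good", "great", "excellent", "approve", "support"}
NEGATIVE = {"bad", "terrible", "awful", "oppose", "failing"}

def sentiment_score(text):
    """Crude polarity in [-1, 1]: +1 if all hits are positive words,
    -1 if all are negative, 0 if the text contains neither."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("I think the president is doing a good job"))   # 1.0
print(sentiment_score("His handling of the economy has been awful"))  # -1.0
```

Run over millions of intercepted emails and blog posts, even a scorer this crude would begin to sort people into rough opinion camps without ever asking them a question.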

Vast stores of information about individuals are now available from a multitude of sources. If you have a machine intelligence that can draw this information together, you’ve got a mind that embraces the Internet and can sift through it with great speed and pluck out whatever it needs. As a storage device, the Internet dwarfs all others. The human brain contains the equivalent of about 3.5 quadrillion bytes of information; the Internet contains 10 times that amount. But what would a robot whose mind embraces the Internet be able to do with all that information? That becomes clear once you start to look at the narrow bits of artificial intelligence that are now emerging.

Machines’ ability to speak has improved by leaps and bounds in the past decade. The mechanical voice you hear when calling the phone company to inquire about a bill seems more annoying than potentially destructive, but only if you fail to imagine the day when the ability to program a machine to understand language and speak it is just a tool that you can buy at RadioShack. The voice speaking to you from your iPhone and fielding your queries started out as advanced technology in elite labs a few decades ago, and now it’s part of a common experience.

Where natural-language ability gets dangerous, potentially, is when it gets a bit more powerful, then seeps down to common usage and becomes a relatively inexpensive tool that just about anyone can use. It’s not just the ability to listen to spoken commands; it’s a matter of interpreting human intent and responding in a way that sounds, well, human. If machines can do that well, then it may get harder to tell them from real humans.

A machine that can understand spoken language and can also move easily in the world of humans could do a lot of other humanlike things. You can imagine using computer technology to impersonate a human—perhaps even someone you know. The idea of a computer that can sense human feelings and come up with an appropriate response is a legitimate subject of research these days, and companies such as Google and Microsoft have a keen interest in it. Crude emotive software has already been used with autistic children to bring out hidden social skills. As scientists understand more about how to simulate human emotions, they may increase the ability of computers to pass themselves off as human.

When you consider this possibility, you can imagine the kind of disruption that could ensue from a terrorist plot to use computers to impersonate people. This type of identity theft goes well beyond anything we know now. It’s not hard to go from these kinds of identity-theft scenarios to one in which machines (or software, which is a type of machine) orchestrate vast disruptions to our economy. The confusion that would reign if software began impersonating important people, handing out conflicting commands, causing markets to tumble and people to behave in odd ways, adds a whole new dimension to the kind of damage that a Stuxnet-like bot could do to the economy.

The precursors to a machine version of Jason Bourne are drones. They are now used by the military, but what happens when the technology that makes it possible to build a drone becomes commonplace, when it’s easy enough for states or even individuals to acquire the capabilities now reserved for the U.S. military? You could imagine a group like al-Qaida or Aum Shinrikyo or Hamas getting their hands on drones that could take out political targets in Washington, D.C., or New York City.

At the moment, this is a bit far-fetched, but it won’t be for long. Scientists have implanted computer chips in the brains of beetles. The chip is connected to the beetle’s nervous system and sends tiny pulses of electricity that make the beetle turn left or right or fly up or down. The chips also have little radio receivers that put the beetles at the remote command of their researcher overlords. In the lab, they’ve gotten the beetles to zig and zag and do loop-de-loops. It’s not an ominous technology at the moment, but it does give the future of drones a new twist. You could imagine a swarm of locusts that respond to digital control wreaking havoc on crops. You could imagine a swarm of bugs with surveillance cameras in their mandibles fanning out across the land in search of particular people that some remote military power wants to target. Mix some gene manipulation in there and you could envision some kind of venomous creature under remote command that can inflict a paralyzing or fatal dose of poison. And so on.

If drone technology keeps marching along, you can imagine a day when the cops can send bee-like drones to your house to see if you’re growing marijuana plants or running a crystal meth lab in your basement. The first worry publicly expressed about a new technology is often concern for privacy. We don’t tend to tell the real horror stories in advance—nobody wants to scare people off new technologies that could be beneficial. But you could just as easily imagine how such mechanical bees could, in the wrong hands, cause considerable disruption. Drones could wind up having many productive civilian uses, but having self-reliant machines in our lives is going to be an adjustment.

We already have a hard enough time living with insects as pests. Imagine when they’re tiny versions of Jason Bourne with wings.

This article is adapted from Fred Guterl’s The Fate of the Species (Bloomsbury). Future Tense is a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.