Future Tense

Can Brain Research Keep Us Safe?

Post-9/11 “neurosecurity” research is very cool, but it holds more promise than results.

Research on neuroscience as it relates to national security has surged since 9/11.

Human conflict is often associated with the emergence of a new science or technology. The Civil War’s Gatling gun changed battlefield tactics and led to modern machine guns, such as the M61, which are still in use. World War I’s chemical weapons proved difficult to manage in the field, provoked nearly universal revulsion, and became the object of international law and a remarkably successful arms control regime. World War II’s atomic bomb was the punctuation mark at the end of the war in the Pacific.

A decade after 9/11 and the anthrax attacks, what will be the signature technology of the war on terrorism? It could well be connected to the brain. Anti-terrorism efforts have included a substantial investment in neuroscience research. The projects in progress have prompted a great deal of soul-searching and wide-ranging ethical debate about the long term. For example, President Bush’s bioethics council expressed concern about the role of human enhancement technologies in the military, while the National Research Council published a report on emerging cognitive neuroscience and its implications for national security.

Missing in these discussions is the short term: What are the plausible prospects for neuroscience in the national security context, and what are the challenges? Whether these funds will lead to innovations of use in real-world conflict, or mainly to important laboratory science, remains to be seen.

The drawing board currently holds a lot of compelling and potentially threatening applied brain science. In 2009 the National Research Council identified a handful of novel threats to U.S. national security due to “technology surprise.” Some of these threats, like cyberterrorism, have become familiar. Another, less familiar concern was neuroscience. The list of neurotechnologies that could create new security advantages as well as new problems is long and diverse: “super-soldiers” who can stay awake and alert for days at a time; brain imaging for detecting deception; implantable brain chips to improve memory and learning; brain-machine interfaces; substances that could aid in interrogation; and genetic information about adversaries that might inform defense planning.

In my 2006 book Mind Wars, I explored the idea that the study of the brain might become an issue for national security. Since then, defense and counterintelligence officials have become even more concerned about this possibility. It’s reflected in the measurable growth of research on neuroscience and terrorism since 2001, a “9/11 effect.” I reviewed the publication rate of science articles since the Sept. 11 attacks, focusing on keywords that included terror or national security in conjunction with words like brain research or mind reading. The results were striking: From 1991 to 2001 there were 25 articles in recognized science journals with these keywords; from 2001 to 2011 there were 147.

Based on public budget numbers, it appears that the Pentagon is now spending hundreds of millions of dollars on neuroscience-related projects. Among the participants is the Pentagon’s science agency, the Defense Advanced Research Projects Agency. A visit to the DARPA Defense Science Office website yields information about at least four programs that relate to neuroscience: Accelerated Learning, Cognitive Technology Threat Warning System, Education Dominance, and Neurotechnology for Intelligence Analysts. Appropriately, DARPA is responding to a reality of the 21st-century war on terror: Even nonstate actors could take advantage of cutting-edge science.

There’s no doubt that we will be seeing a lot more applied brain research in the national security context. In that process, it is easy to be dazzled by the science-fiction-like scenarios of brain-machine interfaces and implanted brain chips. But neuroscientists themselves vigorously disagree about how far the science can go, and the truth is that the prospects for taking many of these ideas outside the laboratory are so far inconclusive at best.

For example, patients who have lost the use of their limbs can now control computer cursors, on-off switches, and robotic arms through implanted electrodes. But the experiments pose risks along with potential benefits like more alert warfighters or thoroughly networked human-machine systems. The technology is clumsy, and the effects are of uncertain duration. Soldiers won’t be running remote robots with their brains alone anytime soon, nor is it clear that doing so would be superior to the way drones are run now. There is a mixture of high hopes and hype for neuroscience-based lie detectors, but no responsible neuroscientist thinks these systems are anywhere close to legal admissibility, nor is it clear that people can’t be trained to evade detection or even that we can always agree on what counts as a lie. For all the talk about cognitive enhancement through new pharmaceuticals, there’s no evidence that performance on, say, an IQ test can be improved by drugs, though some drugs, like Provigil, may keep you awake and alert longer than amphetamines or caffeine. The trouble is, they can keep your competitor or adversary awake longer, too.

So what’s the rationale for national security agencies spending our tax dollars on neuroscience? Although the short-term, real-world possibilities have been overhyped, hundreds of millions of dollars is barely a drop in the bucket of U.S. defense R&D. Neuroimaging technologies like functional magnetic resonance imaging have opened up remarkable opportunities to learn about the brain, and they are only the beginning. Other fields, like genomics, are converging with neuroscience to provide new opportunities for cross-fertilization. In the age of do-it-yourself biology there are growing and justified worries about asymmetric warfare, with small groups or even well-educated individuals able to obtain off-the-shelf assets for malign purposes, perhaps someday including neurobiological materials.

Despite these scare scenarios, we need to keep our perspective. Senior national-security experts I have spoken with over the years, both American and European, agree that the U.S. military has a history of overestimating the extent to which boots on the ground can be traded for technology. We have barely confronted the question of how much technology individual warfighters can be asked to adopt, or the long-term effect of devices or other interventions that target the brain. Experience with technologies that emerged from past conflicts should teach us that it is hazardous to guess the ultimate consequences of innovation. Consider the most important technology to emerge from the Cold War: the Internet. A relatively simple idea called packet switching has truly changed the world in ways that the founders of the “Arpanet” could not have anticipated. Neither could we.

To be sure, the effects of our emerging knowledge about the brain will be profound. In my forthcoming book The Body Politic: The Battle Over Science in America, I argue that America’s battle over the ethics of issues like stem cells and cloning might be distracting us from the implications of the new neuroscience. Even if we guess wrong about what kinds of questions will arise in the new era of neurosecurity, we can be assured that new conflict technologies will, as they always have, change society as much as or more than they change the ways wars are fought.