Cyberwarfare: what Richard Clarke and other fearmongers get wrong.

What Fearmongers Get Wrong About Cyberwarfare

The citizen’s guide to the future.
May 28 2012 8:00 AM

Cyberweapons aren’t easy or cheap to procure—and they could even promote peace.

Terrorists may be keener on anonymity, but the reality is that in the decade since 9/11, no terrorist group has had much success causing serious disruption to civilian or military infrastructure. For a group like al-Qaida, the costs of getting it right are too high, particularly because there is no guarantee that a cyberterror campaign would be as spectacular as detonating a bomb in a busy public square.

In addition to countering the recent moral panic about the threat of cyberwarfare, Liff tells a broader story about the dangers of assuming that technologies (including weapons) have essential, inalienable properties that would produce the same coherent, even revolutionary, effect wherever they were used. Liff doesn't believe cyberwarfare to be revolutionary, and he adroitly argues that the net effect of cyberwarfare on the likelihood of conflict depends on the nature of the actors involved, their relative bargaining strength, and how much credible information they have about each other. As Liff notes:

In most cases [cyberwarfare] is unlikely to significantly increase the expected utility of war between actors that would otherwise not fight. Furthermore, a cyberwarfare capability may paradoxically be most useful as a deterrent against conventionally superior adversaries in certain circumstances, thus reducing the likelihood of war.


Liff points out that earlier generations of military analysts were just as quick to proclaim that strategic bombing and the atomic bomb were "absolute weapons" bound to revolutionize military strategy. It's undeniable that both air power and the atomic bomb have had a profound effect on the nature of military conflict; however, their supposedly inherent logic (e.g., the idea that aerial warfare admits no defense, only offense) has been greatly tempered by the political, social, and economic constraints and considerations of the actors that possess them. Air power has not always neatly translated into political power.

The useful lesson here is that teleological accounts of technological change rarely offer sharp analytical insights. All too often they result in confused thinking and poor policy. Yet such teleological thinking about technology still rules the day. Just as it's fashionable to think that cyberwarfare is inherently bad for international security and world peace, it's equally fashionable to think that social media are inherently bad for dictators or that online filters are inherently bad for serendipity and public debate. The real world, of course, is never that pliable and neat. It eschews such half-baked teleological theorizing and pushes technologies into roles and functions that no one expects them to take.

Whatever inherent logic cyberweapons, social media, or online filters might possess, that logic inevitably mutates once these tools find their way into whatever political, social, or cultural regime guides their use in practice. This is how cyberweapons end up promoting peace, social media end up strengthening totalitarianism, and online filters end up improving information discovery. We may not always be able to predict such effects in advance, but the longer we stick to teleological explanations, the lower the odds that we will ever develop better frames for technological analysis and decision-making.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.