Should we worry about cyberwarfare? Judging by excessively dramatic headlines in the media, very much so. Cyberwarfare, the argument goes, might make wars easier to start and thus more likely.
Why so? First, cyberwarfare is asymmetric; being cheap and destructive, it may nudge weaker states into conflicts with stronger states—the kinds of conflicts that would have been avoided in the past. Second, since cyberattacks are notoriously difficult to trace, actors may not fear swift retaliation and behave more aggressively than usual. Third, as it's hard to defend against cyberattacks, most rational states would prefer to attack first. Finally, since cyberweapons are surrounded by secrecy and uncertainty, arms control agreements are hard to implement. More cyberwarfare, in other words, means more wars.
Not so fast, cautions a new and extremely provocative article by Princeton doctoral candidate Adam Liff in the Journal of Strategic Studies. According to Liff, to assume that cyberwarfare has an inherent logic—a teleology—that would always result in more conflict is short-sighted. Furthermore, it fails to consider the subtleties of both military strategy and power relations. Instead of basing our cyber policy on outlandish scenarios from second-rate films, we have to remember that those who would deploy cyberweapons have real agendas and real interests—and would have to pay real costs if something goes awry.
Given today's geopolitical situation, Liff sees no reason for the doom-and-gloom fearmongering of leading ambassadors of the cyber-industrial complex, most notoriously Richard Clarke and his best-selling 2010 book Cyberwar. Liff even spells out several scenarios where cyberwarfare would actually decrease armed conflict. That's right: The advent of cyberweapons may eventually promote world peace. Hippies of the world unite—and learn how to mount cyberattacks!
Cyberwarfare may seem asymmetrical, but it's a myth that advanced cyberweapons are cheap and easily available. Developing them requires a lot of resources, time, and operational secrecy. Weak actors are not really capable of mounting protracted attacks that could cripple the infrastructure of well-defended systems.
But even if they were, they would probably choose not to engage in cyberwarfare. Offensive cyberattacks by weaker states make sense only if they can back up their digital might with conventional weapons. Otherwise, they might get wiped out by the conventional military response of the stronger state. This explains why Somalia or Tajikistan is not likely to wage cyberwarfare against the United States anytime soon; whatever damage they might cause through cyberattacks would quickly be met with a conventional military response.
Nor would states engaged in cyberwarfare necessarily know about the actual consequences of their own cyberattacks. Even advanced actors like the United States may have no idea about the probability of success of such attacks. The risk of self-inflicted damage is high, and cyberattacks might inadvertently push some otherwise lucrative assets (like an enemy's banking infrastructure) off the table. Such uncertainty may be the best deterrent of all.
As Liff points out, it's facile to think that rational actors would prefer to exploit one another’s cyber-vulnerabilities and engage in a costly cyberwar if they can find other, cheaper ways of settling their conflict. Here the availability of cyberweapons, whatever their actual destructive potential, might actually allow weaker states to get better bargains from their stronger adversaries, perhaps even avoiding conflict.
Likewise, we shouldn't forget that wars are primarily about coercion—and it's hard to coerce other actors without claiming responsibility for the damage caused to their property. Yes, cyberattacks may be hard to trace—but any government that uses them in expectation of getting other governments to act in accordance with its wishes would want to claim such attacks as its own. (The reason Russia didn't claim responsibility for the cyberattacks in Estonia in 2007 and Georgia in 2008 is that those attacks were mostly inconsequential: an act of mere hacktivism in the former case and a sideshow to the kinetic war on the ground in the latter.)
Terrorists may be more keen on anonymity, but the reality is that in the decade since 9/11, no terrorist group has had much success causing serious disruption of the civilian or military infrastructure. For a group like al-Qaida, the costs of getting it right are too high, particularly because it's not guaranteed that such a cyber-terror campaign would be as spectacular as detonating a bomb in a busy public square.
In addition to countering the recent moral panic about the threat of cyberwarfare, Liff tells a broader story about the dangers of assuming that technologies (including weapons) have essential and inalienable properties that would have the same coherent—and yet revolutionary—effect wherever they were used. Liff doesn't believe cyberwarfare to be revolutionary—and he adroitly argues that the net effect of cyberwarfare on the likelihood of conflict depends on the nature of the actors involved, their relative bargaining strength, and how much credible information they have about each other. Notes Liff,
In most cases [cyberwarfare] is unlikely to significantly increase the expected utility of war between actors that would otherwise not fight. Furthermore, a cyberwarfare capability may paradoxically be most useful as a deterrent against conventionally superior adversaries in certain circumstances, thus reducing the likelihood of war.
Liff points out that earlier generations of military analysts were as quick to proclaim that strategic bombing and the atomic bomb were “absolute weapons” that were bound to revolutionize military strategy. It's undeniable that both air power and the atomic bomb have had a profound effect on the nature of military conflict; however, their inherent logic (e.g. the idea that aerial warfare admits no defense, only offense) has been greatly mitigated by the political, social, and economic constraints and considerations of the actors that possess them. Air power has not always neatly translated into political power.
The useful lesson here is that teleological accounts of technological change rarely offer sharp analytical insights. All too often they result in confused thinking and poor policy. Yet such teleological thinking about technology still rules the day. Just as it's fashionable to think that cyberwarfare is inherently bad for international security and world peace, it's equally fashionable to think that social media are inherently bad for dictators or that online filters are inherently bad for serendipity and public debate. The real world, of course, is never that pliable and neat. It eschews such half-baked teleological theorizing and pushes technologies into roles and functions that no one expects them to take.
Whatever inherent logic cyberweapons, social media, or online filters might possess, such logic inevitably mutates once these tools find their way into whatever political, social, or cultural regime guides their use in practice. This is how cyberweapons end up promoting peace, social media end up strengthening totalitarianism, and online filters end up improving information discovery. We may not always be able to predict such effects in advance, but the longer we stick to teleological explanations, the lower the odds that we will ever develop better frames for technological analysis and decision-making.
This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.