Future Tense

Why America’s Current Approach to Cybersecurity Is So Dangerous

It treats users like they are the problem, when they should be part of the solution.

It’s almost impossible these days to avoid media coverage of Russia’s role in hacking the 2016 election. So it was in 2015, when news broke that Chinese hackers had breached the U.S. Office of Personnel Management. Likewise for the big cyberattacks of 2014 (Sony Pictures, Home Depot) and 2013 (Target). For the public, it’s usually these kinds of incidents that come to mind when they hear the term “cybersecurity.” They are complex and costly, and they cast doubt on the trustworthiness of our major institutions—from government to banks to the electric grid.

Yet multiple surveys show that Americans tend to ignore even the most basic security measures with their own digital devices.

How to account for our public interest but our personal … well … meh? We should be concerned that, as a society, our minds go mushy when it comes to “digital literacy,” “information security,” “online safety,” or whichever name we choose. In fact, that mushiness is a major reason why America’s current approach to cybersecurity is so dangerous. We’re ignoring the behaviors of the overwhelming majority of actual users, and therefore leaving the largest attack surface undefended.

Go behind the headlines of the latest megahack, and what you’ll find is a growing public-safety and national-security crisis. We collect little useful data on it, and much of the bad stuff—networks ransomed for money, reputations harmed, secrets stolen—goes unreported. We are barely discussing how to help people help themselves in the digital world, let alone do their part in protecting our major networks and critical infrastructure. To the extent we are all part of the contest in cyberspace, we’re essentially deploying our troops without armor, our submarines without sonar.

I’ve sat slack-jawed through presentations by smart technologists who truly believed that the public simply had to wait a bit longer for security to come their way. In the meantime, multiple awareness campaigns offer a never-ending stream of tips to an imaginary population of American consumers with above-average, English-language literacy and digital skills. America is many things, but we are hardly that. The result is that we are, by and large, left to fend for ourselves in a tricky and unforgiving environment populated by skilled, patient adversaries. It’s not even close to a fair fight.

Until we embrace a vision of public cybersecurity that sees people at all skill levels as essential to our collective security, there will be no widespread cybersecurity. We assume consumers aren’t willing to pay for or care about security, and so instead of thinking systemically about how to change that, we double down on technological solutions. This invites a lot more self-inflicted pain, with real consequences for our social and economic health as well as our homeland and national security.

Right now, America’s collective cybersecurity effort is headed toward near-certain failure for reasons within our own control. In less than a decade—thanks to the influx of dollars and high-level policy and press attention—the cybersecurity field has transformed what is actually a “people problem with a technology component” into its exact opposite. It’s not too late to change course. But that first requires rejecting the fallacy that individuals can, or should, simply wait around to be passive recipients of cybersecurity.

There is little real doubt that cybersecurity poses one of the biggest and most complex challenges we face. Breach after breach has spotlighted the frightening vulnerability of our nation’s networks. Now, we’re playing catch-up to close security gaps and harden defenses against the bad guys. And bad they are: Losses from cybertheft, espionage, disruption, and destruction are now spoken of in the trillions of dollars. The global market for cybersecurity, which was only $3.5 billion in 2004, will this year reach $170 billion, or the size of the entire economy of New Zealand. This year, the federal government alone will spend $18 billion on cybersecurity. And none of that counts the human toll from cyber-enabled crimes like child pornography and human and sex trafficking.

But when we shift from talking about the problem of cybersecurity to the solution, it’s clear we’ve drifted dangerously off a sensible course. Official Washington and Silicon Valley have adopted a set of faulty assumptions about cybersecurity and internalized them to such a degree that it’s practically a new religion, somewhere between late-19th-century technological determinism and medieval alchemy. The core tenet of this mistaken faith? It fits on a single tablet: that cybersecurity will magically emerge once the right mixture of technology, regulation, and market incentives is poured into the cauldron. Ordinary people need not apply.

I’ve spent more than a decade thinking about how to communicate with the public about the threats, as well as the opportunities, of the online world. In my view there are two big reasons for our half-hearted approach toward ordinary users. The first is structural: It flows from the highly decentralized nature of information-technology use in our society. Security gaps, and the damage that results, are widely dispersed. We don’t have mature civilian, civic, and noncommercial institutions to help Americans get the best out of networked life while mitigating the dangers for the largest number of people. There’s no “cyber-CDC.” And while government tries to figure out its own proper roles, most cyber threat information, and much of the funding for cybersecurity education campaigns, comes from technology companies. Authority for regulation and law enforcement, where it exists, is patchy and incoherent. For example, nearly every state still sets its own rules for notifying victims of data breaches. And on and on.

The second reason is perceptual and comes from the highly corporatized nature of digital life. The tech industry has become such a colossus that it has achieved a strange, self-interested triple role: producer of flawed products and services, town crier about the gravity of these same vulnerabilities, and confidence man peddling the solution to the problem. Recently, Ian Levy—chief technologist for the U.K.’s eavesdropping agency, the Government Communications Headquarters (GCHQ)—criticized large network security firms for hyping the threat in apocalyptic terms and essentially arguing that only their “witchcraft” could address the problem. Levy wasn’t downplaying the risks from cyberspace; his critique should be read as a tonic, a call to step back from the commercial hype and cyber jargon. Take a deep breath. And then take practical steps to mitigate our most likely risks—in essence, to lessen our attack surface.

In recent years we’ve made great progress reaching the public with useful public health and safety information. But we’ve barely begun the conversation about the new threats from cyberspace. We sorely need to get better at being digital because, whether we like it or even know it, we all live within a growing constellation of internet-connected sensors, devices, and networks that are constant targets of online probing, theft, ransom, espionage, and even destruction. We need better everyday habits to increase our herd immunity against botnets. We need to see that cybersecurity—like all aspects of safety, security, and resilience—is a shared responsibility. Better devices and apps won’t save us, since there are myriad other ways that individuals—even highly trained ones—become the weak link allowing bad guys to access personal, corporate, and government information assets. And almost all efforts at online safety, while well-meaning, are so poorly designed as to preclude knowing whether they work.

It’s not magic: As with health or safety education, we need to start with basic steps and repeatable behaviors—like hand-washing or looking both ways before crossing. The public health and global development fields have adopted behavioral science, metrics, randomized trials, and outcome evaluation to answer the all-important question: What works?

Noted security expert Bruce Schneier has—smartly and bravely—written of the need for additional forward-looking regulation to help correct the market failure that today gives all but the biggest technology players little incentive to invest in security. But we shouldn’t take it as a given that the American public must remain passive recipients of security, waiting in the wings. Rather, if we view the everyday user as a massive potential counterforce to the bad actors who profit from our rampant cyber-insecurity, we may finally provide the stronger signal the marketplace needs to truly take security seriously.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.