Calling Humans the “Weakest Link” in Computer Security Is Dangerous and Unhelpful

Future Tense
The Citizen's Guide to the Future
Jan. 22 2016 9:39 AM

Don't touch that computer, human—you can't be trusted. (Photo by benhammad/Thinkstock)

The idea that humans are the “weakest link” in computer security is very popular among computer scientists and people who work on the technical elements of cybersecurity. After all, if your job is to secure computer systems, it’s satisfying to feel that the problems lie not in the computers but in everyone else. Of course, it’s completely true that many computer security incidents involve human users making bad decisions—opening emails or downloading files despite warning signs; using obvious, easily guessable passwords; ignoring warning signals from their browser or operating system. But that’s no reason for technologists to feel smug about their accomplishments. In fact, just the opposite: These sorts of mistakes are evidence that the technology is failing its human users, not the other way around.

As well trod as this territory is, there’s still something a little startling about the ability of technologists to broadly condemn human stupidity as the root of all cybersecurity problems. Take, for instance, the headline on a Wall Street Journal piece that ran earlier this week: “How to Improve Cybersecurity? Just Eliminate the Human Factor.”


Eliminate the humans! Why didn’t we think of that before? It’s an attitude strangely reminiscent of a certain type of hostile librarian who gives the impression that she would much prefer you not touch, or even breathe on, any of the precious books in her care. The whole point of computers—and libraries, for that matter—is that they’re supposed to improve the lives of people, and yet, strangely, it’s the people who end up being painted as the problem.

I’m more sympathetic to computer scientists (and even to librarians) than I’m probably conveying here. I can well imagine how frustrating it is to have devoted time and energy to designing something that is then recklessly and foolishly undermined by idiots who still think that 123456 is a great password. But with so many people falling prey to phishing scams or using weak credentials, it’s tough (and not terribly productive) to write them all off as idiots—or even to think of them, as Christopher Mims wrote in the Journal article, as “a critical, unpatchable weakness.”

Mims makes plenty of sensible points in his piece about the role of social engineering in computer security incidents, how susceptible most of us are to phishing attempts, and how hard it is to educate people on computer security—a topic I’ve also grown increasingly demoralized about. And, in fairness to him, he probably didn’t have any say over that headline. The best parts of his article hint subtly toward encouraging better human-centered design for security, though it can be hard to tell given how dismissive the language is toward humans in general.

It’s hard to educate people about what SSL is and how it works. So a human-centered design approach for security would suggest that, say, we create technologies that make it easier for people to tell when they’re being deceived online and limit the resulting damage—for instance, tools that flag when they’re dealing with emails from people they haven’t interacted with before or that isolate newly downloaded programs from accessing the rest of a machine and test them for any ill effects. And this approach is not totally unrelated to the philosophy of “assume that humans will fail and automate around them” that Mims cites. The difference lies in whether you assume humans will fail or, instead, assume that their opinions and ideas and instincts should help you design the tools that they use.
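One of the tools suggested above—flagging email from people a user has never corresponded with before—is simple enough to sketch. This is a hypothetical illustration, not any real mail client's implementation; the message format and the `known_contacts` set are assumptions for the example:

```python
# Hypothetical sketch of a human-centered phishing defense: instead of
# expecting users to spot deception unaided, flag messages from senders
# they have never corresponded with before.

def flag_first_time_senders(messages, known_contacts):
    """Return (message, is_flagged) pairs, flagging unfamiliar senders."""
    seen = {addr.lower() for addr in known_contacts}
    results = []
    for msg in messages:
        sender = msg["from"].lower()
        results.append((msg, sender not in seen))
        seen.add(sender)  # later mail from the same address is no longer novel
    return results

inbox = [
    {"from": "alice@example.com", "subject": "Lunch?"},
    {"from": "payroll@evil.example", "subject": "Urgent: update your bank info"},
    {"from": "alice@example.com", "subject": "Re: Lunch?"},
]
results = flag_first_time_senders(inbox, known_contacts={"alice@example.com"})
```

The point of the design is that the signal reaches the user at the moment of decision, rather than relying on security training they received months earlier.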

These may seem like subtle distinctions—and in some ways they probably are. If we end up with better email filtering and security technologies that block more phishing emails from landing in recipients’ inboxes or monitor systems for malware—the technologies Mims specifically advocates for in his piece—does it really matter whether they’re developed by someone raging about the idiocy of humans? I’m not totally certain. I think a healthy cynicism about how easily people are deceived is probably a good thing for a security engineer. At the same time, I worry that an engineer who is focused on the need to “patch” human behavior or overly inclined to think of humans in the same terms as coding errors and technical glitches runs the risk of understanding—and respecting—human behavior too little to be able to effectively support it.

The final paragraph of Mims’ article straddles this uncomfortable divide between the mindset that computers should ultimately help humans and the view that humans are ultimately ruining perfectly good computer systems with their incompetence (and deserve to be eliminated!). He writes: “History has shown us we aren’t going to win this war by changing human behavior. But maybe we can build systems that are so locked down that humans lose the ability to make dumb mistakes. Until we gain the ability to upgrade the human brain, it’s the only way.”

The first sentence is dead on: There’s no point in building systems that cause problems and then demanding that everyone figure out how to use them better. On the other hand, “locking down” systems so that people can’t make “dumb mistakes” isn’t the right mindset for developing technical tools that make it harder for people to deceive each other or extract information from one another under false pretenses. At some point, computer security technology may reach the point where we can confidently blame breaches on the stupidity of the people who get hacked, but first we have to be sure that technology isn’t also tripping up reasonably bright, competent people—as it seems to still do. And we don’t get to that point by upgrading, or patching, the human brain; we get there by accepting that the onus is on the designers of technology to support—not bypass—people’s decisions and provide them with the right signals and nudges to make better ones.

Future Tense is a partnership of Slate, New America, and Arizona State University.

Josephine Wolff is an assistant professor of public policy and computing security at Rochester Institute of Technology and a faculty associate at the Harvard Berkman Center for Internet and Society. Follow her on Twitter.