Future Tense

Speed Kills

One way to improve cybersecurity: Make users slow down.

Slower thinking could help address many of the stickiest cybersecurity problems.

Olcayduzgun/Thinkstock.

On Sept. 26, 1983, a Soviet military officer by the name of Stanislav Petrov avoided ending the world by taking a few extra seconds to think. The computer screen he sat before ordered him to launch a retaliatory nuclear attack on the United States after the Soviet missile detection system picked up five missiles heading his country’s way. The system reported he had 20 minutes before total annihilation, so every second of hesitation mattered. Had Petrov received the same military training as his comrades in arms, he might simply have reacted to the order, initiating a nuclear holocaust. Instead, he correctly judged that if the United States had actually fired its missiles, it would likely have sent many more than five. Petrov decided to ignore the computer system, and in doing so avoided World War III. Since the world learned about the incident, it has become a famous example of the importance of slowing down to think in the face of a critical decision.

While the stakes may not (yet) be nearly so great, many of the major cyberattacks of the past few years—including the breaches of Dyn, Equifax, and the Democratic National Committee—started because people acted reactively instead of slowing down to think like Petrov did. It’s true that automation, machine learning, and artificial intelligence may increasingly take humans out of the loop in online security, effectively offloading security decisions from people to computers. However, while these algorithms have greatly improved, much like the old Soviet missile detection system they’re far from foolproof. That’s why, if we want to reduce the incidence of cyberattacks, we must figure out how to get the humans in the driver’s seat to think and act more like Petrov.

Slower thinking could help address many of the stickiest cybersecurity problems, like failing to update computer software, creating weak passwords, succumbing to phishing attacks, and clicking on bad links. But a tension arises when we try to make our devices both safe and lightning-quick. Imagine if every email you opened or link you clicked were preceded by some form of “Are you sure?” or “Have you thought about this?” You’d soon switch to a less cumbersome operating system, email client, or web browser. When people engage with fast technology, they operate reactively. How many times have you mindlessly recycled an old password when signing up for a new website? When you make quick decisions, though, you’re prone to missing red flags that might have been obvious given more time—and by “more time,” we’re talking seconds.

It’s hard to point to a technical glitch that creates this vulnerability. The psychological drivers, on the other hand, are right in front of us. The two mechanisms that help humans make decisions and solve problems—“System 1” for quick, automatic choices and “System 2” for deliberative thinking—foster efficiency in our daily lives. Automatic thinking saves us time on repetitive tasks, and deliberative thinking enables us to solve more complex problems we haven’t encountered before. But there are times when System 1 hastily produces a faulty conclusion (that a certain website looks safe, perhaps), and System 2 doesn’t kick in quickly enough to correct it. That’s why, when researchers intentionally slow down people’s decision-making and allow them to use System 2, people make better choices and pay greater attention to the surrounding context.

Unfortunately, the software and hardware we increasingly rely on isn’t designed with human psychology in mind. As important as security is, technological advancements tend to promote speed: faster processors, higher upload and download speeds, more responsive user experiences. This pushes users into a setting that triggers the reactive, automatic thinking of System 1 rather than the deliberative decisions and actions of System 2.

How, then, should we think about designing user experiences for hardware and software systems? One insight that came out of research at ideas42, the design firm where I work, is simply to slow the user down. But we have to do this thoughtfully, because we can’t slow people down all the time. Imagine if people tapped System 2 for tasks like driving home from work or answering the phone. It would take too long and needlessly deplete finite mental energy. We can, however, create a scaffolded interface that limits the errors people make by slowing them down at the right moment and facilitating better decision-making.

Take updates, for instance. Security professionals emphasize the importance of applying software patches promptly. Yet despite the ease and importance of updating, many people procrastinate on this critical step. Why? Part of the problem is that update prompts often arrive when the user is preoccupied with something else, and they offer easy outs with various “remind me later” options. How many times have you clicked “remind me tomorrow” before finally clicking “update now”? A more effective intervention might require users to specify what time they wish to update and then prompt them to commit to that time, turning the update into a choice they can follow through on rather than delay indefinitely.
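To make the idea concrete, here is a minimal sketch of what such a commitment-style prompt could look like. It is purely illustrative: the prompt wording, the 24-hour window, and the command-line flow are all assumptions for the sake of the example, not a description of any real operating system’s update mechanism.

```python
# Hypothetical sketch of a "commitment device" update prompt.
# The flow illustrates asking users to pick a concrete time and then
# confirm it, instead of offering an open-ended "remind me later."

from datetime import datetime, timedelta


def ask_for_update_time() -> datetime:
    """Ask the user to commit to a specific update time within the next 24 hours."""
    while True:
        raw = input("An update is ready. What time today will you install it? (HH:MM) ")
        try:
            hour, minute = map(int, raw.split(":"))
            chosen = datetime.now().replace(hour=hour, minute=minute, second=0, microsecond=0)
        except ValueError:
            print("Please enter a time like 18:30.")
            continue
        if chosen < datetime.now():
            chosen += timedelta(days=1)  # a time that has already passed rolls over to tomorrow
        return chosen


def confirm_commitment(when: datetime) -> bool:
    """One slow, deliberate confirmation -- the 'System 2' moment."""
    answer = input(f"You chose {when:%H:%M}. Commit to updating then? (yes/no) ")
    return answer.strip().lower() == "yes"


if __name__ == "__main__":
    scheduled = ask_for_update_time()
    if confirm_commitment(scheduled):
        print(f"Update scheduled for {scheduled:%Y-%m-%d %H:%M}. No further snoozing offered.")
    else:
        print("Update not scheduled; the prompt will return tomorrow.")
```

The design choice that matters here is the absence of an open-ended snooze button: the user either names a time and commits to it, or the prompt simply comes back later, unchanged.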

Another major problem in cybersecurity is ignored browser warnings—those landing pages that gate website access when site traffic isn’t encrypted or when malware has been detected. People often swiftly dismiss those warnings, largely because of habituation: The more we encounter something, the less we respond to it. Because people are focused on getting to a website or continuing their work, they see the warning as a mere impediment to their ultimate goal. And it’s an impediment that’s often safe to ignore. In this case, we’re slowing people down too often—about 81 percent of online alerts are false alarms, making it much less likely that people will take them seriously when there is in fact a major risk. Another problem is that the warnings all look alike, so users can become inured even to alerts they have never encountered before.

And that’s dangerous. Designers of software and hardware have a responsibility to help users make decisions. Slowing users down at the right time is one way to realign the online environment with human behavior.

We’ve been fortunate that the sorts of cyberattacks that have meaningfully disrupted people’s lives still pale in comparison to the risks of a nuclear standoff. However, we’ve already begun to see how human error can enable intrusions into critical infrastructure such as power plants, hospitals, and election systems. It is only a matter of time before the stakes are raised and a major hack puts human lives at risk. While government and industry are working tirelessly to develop better machine learning and artificial intelligence tools to reduce the likelihood that human error will play a significant role in future attacks, we still have a long way to go. In the meantime, it’s our responsibility as the designers of hardware and software systems to help people exhibit better judgment—to be more like Petrov and less like their dumb, uncreative computers.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.