The malware attacking the NHS could've been stopped. Here's why it wasn't.

Future Tense
The Citizen's Guide to the Future
May 12 2017 4:20 PM

An easy target.

Photo by Christopher Furlong/Getty Images

The ransomware attacks spreading across the computer systems of the British National Health Service this week are a stark reminder of the shocking state of software-updating practices in even the most critical infrastructure systems across the world. The attacks involve the ransomware strain WannaCryptor, which encrypts the contents of infected computers until the victims make a Bitcoin payment of roughly $300. WannaCryptor takes advantage of vulnerabilities in the Windows operating system that Microsoft patched in March; in April, a group called the Shadow Brokers leaked exploit tools targeting those same vulnerabilities, allegedly stolen from the NSA.

The NHS had two months to install this patch and inoculate itself from WannaCryptor—but it didn’t. In fact, many systems remain vulnerable. It would be bad enough if a wave of hospitals were under attack because a brilliant, determined adversary had identified new, never-before-exploited vulnerabilities in their computer systems. But to be suffering these sorts of crippling attacks at the hands of an adversary who is merely recycling old malware, which could have been stopped using existing patches, is downright shameful.

This is an old story. Computer security workers have been complaining about the people and organizations who don't download security patches promptly for pretty much as long as there have been software patches. If you've ever dismissed a warning from your operating system urging you to download a critical update, you're part of the problem. But then, you're probably not making that decision on behalf of an entire hospital—much less an entire nation's health service.

And yet, those software patching decisions that are so much more crucially important in the context of health care and other critical infrastructure systems are, at the same time, much more difficult to execute. Ironically enough, this is partly because the health care industry has historically been subject to much more stringent data security and privacy regulations and standards than other sectors. In the United States, for instance, medical information is subject to the requirements laid out in the Health Insurance Portability and Accountability Act of 1996. That means that every new system or piece of software purchased by a hospital or health care provider in the U.S. needs to be approved as being HIPAA-compliant.

This is probably a good idea, at least in theory. It makes sense to have some checks and security standards for health care-related computer systems and software. But it also means that updating systems—switching to a newer version of an operating system, for instance—can be a major challenge for health care organizations. A new operating system, or even an updated operating system, can often mean switching to new software programs and altering other components of the network. But at a hospital, every single one of those changes necessitates a slow, expensive compliance audit to ensure that none of the government’s data protection standards has been violated.

Instead of encouraging hospitals to make rapid changes and updates to their computing infrastructure, policy initiatives aimed at improving the security of health data have instead focused on trying to ensure that those decisions be carefully vetted and evaluated. It’s impossible to have it both ways: Either we can demand that hospitals do an in-depth sector-specific check of new systems and software before implementing anything, or we can expect them to download all important security patches within a matter of weeks. And it’s very difficult to know how health care providers should best strike a balance between these two goals.

At present, it's very difficult for the health care industry to respond to threats even over the course of two months—and as this week's news demonstrates all too clearly, that's a problem with enormous associated risks. On the other hand, it could also be risky to pressure hospitals into updating systems and downloading new software too quickly, before it can be thoroughly evaluated and vetted.

One of the other striking features of the spread of the WannaCryptor ransomware across NHS hospitals is the apparent lack of effective network partitioning or quarantining tools in the health care sector. That the malware spread so quickly among multiple hospitals suggests that the NHS struggled to cut off the infected machines and had no serious contingency plan in place for how to deal with a malware infection in its centralized system.

In computer security, we often like to take metaphors, names, and lessons from the public health sector. Notions of quarantining computers, teaching users good computer hygiene, even computer viruses, all originate from the language and practices of medicine. It seems the lessons technologists have drawn from the health care world need to be conveyed back to the hospitals and health care providers where they originated.

Future Tense is a partnership of Slate, New America, and Arizona State University.

Josephine Wolff is an assistant professor of public policy and computing security at Rochester Institute of Technology and a faculty associate at the Harvard Berkman Center for Internet and Society. Follow her on Twitter.