Future Tense

Malware in the Hospital

The biggest danger to medical devices is mundane—but manageable, says the Food and Drug Administration.

One big cybersecurity risk to hospitals is more mundane than hackers: It’s malware.

Image by Thomas Northcut/Thinkstock

Security researcher Beau Woods will never forget the day he got a call from the neonatal intensive care unit. The fetal heart monitors kept rebooting, putting infant lives at risk. The Zotob worm was big that year, and the malware—designed to steal credit card details, but so poorly coded that it caused the devices it affected to repeatedly reboot—had infected many machines at the hospital where he worked. The physicians felt powerless to do anything. Could Woods help?

The Zotob authors were eventually apprehended and sent to prison. They clearly never intended their malware to hurt anyone, much less put premature babies in hospitals at risk. They were criminals looking to make a quick buck. But the incident highlights a critical point in the debate surrounding medical device cybersecurity.

“Who would be crazy enough to hack a hospital?” you may ask. People with bad intentions do exist, of course, and thanks to the Internet, every bad actor on the planet is now your next-door neighbor. Still, the greater cybersecurity risk to hospitals is more mundane—and easier to solve. It’s the Zotob worms of the world that we need to worry about.

Because medical device cybersecurity—or cybersafety, as some prefer to call it—is poor. I Am the Cavalry, a group of concerned security researchers, has called 2015 “year zero” of medical device cybersafety and noted that the health care sector was 15–25 years behind banking and retail when it comes to defending against online threats. Which, when you consider the number of major retail outlets that have gotten hacked recently—Home Depot and Target come to mind—is not terribly reassuring. Nor was the cyber bank heist in 2015 that Kaspersky Lab estimated at close to $1 billion.

To prove the point, last year researchers Scott Erven and Mark Collao ran honeypots—servers pretending to be medical devices, including an MRI and a defibrillator. The result? Malware like the Zotob worm infected the devices hundreds of times.

This kind of malware spreads across the Internet by probing connected devices for weaknesses. And because many medical devices run the same operating systems as consumer computers and corporate servers, automated attack tools like the Zotob worm can’t tell the difference between a juicy target full of credit card details and a life-saving medical device attached to the Internet.
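
Erven and Collao’s honeypots worked on the same principle: A machine that merely listens on the network will accumulate hits from opportunistic worms scanning address ranges at random. As a rough illustration (not the researchers’ actual setup), here is a minimal honeypot sketch in Python that simply records every connection attempt; the port number and log file name are arbitrary choices for the example.

```python
# A minimal honeypot sketch: a TCP listener that records every probe it receives.
# Illustrative only; real honeypots like the ones Erven and Collao ran mimic
# specific device services. The port number and log file name are arbitrary.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 8080        # assumed port for the example
LOG_FILE = "probes.log"   # assumed log destination


def run_honeypot():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        while True:
            conn, (addr, port) = server.accept()
            with conn:
                stamp = datetime.now(timezone.utc).isoformat()
                # Opportunistic worms scan address ranges blindly, so even a fake
                # "device" that does nothing accumulates hits without being targeted.
                with open(LOG_FILE, "a") as log:
                    log.write(f"{stamp} probe from {addr}:{port}\n")


if __name__ == "__main__":
    run_honeypot()
```

Pointed at the open Internet, even a do-nothing listener like this gathers entries over time, which is essentially the effect the researchers measured at far greater fidelity and scale.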

The key takeaway, the researchers emphasized, was that no one was deliberately targeting medical devices, so far as they could tell. “Malicious intent is not a prerequisite to patient safety issues,” Erven told the audience at the security conference DerbyCon, where he presented the findings. Rather, the accumulated weight of opportunistic malware could overwhelm a medical device and cause patient harm. Just like the Zotob worm.

The Food and Drug Administration agrees. “Network-connected/configured medical devices that are infected by malware can disable a device from properly performing its clinical function. This, in turn, could lead to a patient safety concern,” Suzanne Schwartz, director of emergency preparedness/operations and medical countermeasures in the FDA’s Center for Devices and Radiological Health, wrote in an email statement.

Malware could affect medical devices in a few different ways. Woods, who now works with I Am the Cavalry to push for better medical device cybersafety, says that in some cases, it might not interfere with the course of treatment. Malware could also make a medical device unusable—an unintended denial of service attack, like what happened with the fetal heart monitors infected with Zotob.

But the most insidious case would be a silent failure, in which the device remains usable but is not doing what the physician is telling it to do.

“In most cases physicians can fall back to nonconnected medical device operation,” Woods said. “But if the device has essentially failed silently, it could potentially cause harm to patients without it being disclosed.”

The problem of software injuring patients by accident is not a new one. In the Therac-25 incident of the 1980s, at least a half-dozen cancer patients in the United States and Canada received massive overdoses of radiation as a result of a programming flaw. Designed to deliver radiation therapy to cancer patients, the Therac-25 was a state-of-the-art medical device running the most modern software available at the time. That same software, though, led to the deaths of several patients.

But the Therac-25 was not connected to the Internet, and only 11 of the devices were ever deployed. A handful of incidents over several years gave investigators time to analyze the data and discover the problem. But once you connect such devices to each other, and to the Internet, there can be unintended consequences. And those consequences, like everything on the Internet, scale.

“The problem now is no longer actuarial but epidemiologic,” Woods said. “You have a patient zero or common cause that spreads through the entire exposed population.”

Given the low baseline security of most medical devices—many continue to run Windows XP or unpatched versions of Linux—entire hospitals are left vulnerable to a plague of malware that could disrupt, injure, and even kill, all without any intent on the part of the malware author.

In December, medical device security researcher Kevin Fu told Healthcare IT News that a hospital in Boston ran more than 600 unpatched Windows XP boxes. Last June, security researcher Jeremy Richards reported a severe security vulnerability in a drug pump to the Department of Homeland Security, calling it “the least secure IP-enabled device I’ve ever touched in my life.” Richards’ report resulted in the FDA warning hospitals to stop using the drug pump in question.

The good news is there are solutions. The FDA is working hard to raise awareness of the issue. “Vigilant cyberhygiene is a critical step in addressing these more common and more likely scenarios,” the FDA’s Schwartz told me.

Cyberhygiene protects patients from the threat of malware infection the same way hand-washing and other sanitary measures protect patients from microbial infection.

According to Erven, there are three main security hygiene issues across the board, not specific to any manufacturer.

“One, use of known weak default or hard-coded administrative credentials,” he said. Many medical devices ship with administrative passwords that cannot be changed by the hospital or end user and that can be discovered simply by downloading the device manual from the manufacturer’s website.

“Two, legacy systems and inability to apply updates or patches.” Microsoft and Apple push security patches to consumer devices on a regular schedule; medical devices get no such treatment. Many mission-critical medical devices run years-old, unpatched operating systems that remain vulnerable to now-ancient malware. In fact, confusion over the legality of shipping security patches has led the FDA to state repeatedly over the past year that security patches alone do not require recertification of the device. The prospect of recertification is a common reason medical device manufacturers have given for avoiding post-market security hygiene.

“Three, lack of encryption being utilized in the medical device and the supporting systems and applications.” Encryption, Erven said, ensures not only the confidentiality and privacy of patient data but also its integrity. Data corruption could cause a silent failure that resulted in a physician prescribing the wrong treatment—or even an unintended treatment being delivered.
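
To make the integrity point concrete, here is a small sketch using authenticated encryption, offered in the spirit of Erven’s recommendation rather than as any particular device’s implementation. With authenticated encryption, data that has been altered fails to decrypt outright instead of silently yielding wrong values. The example uses Python’s third-party cryptography package; the dosage record is invented for illustration.

```python
# Sketch of the integrity benefit of authenticated encryption: tampered data
# fails to decrypt instead of silently yielding wrong values. Uses the
# third-party "cryptography" package; the dosage record below is invented.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # in practice, key management is the hard part
cipher = Fernet(key)

record = b"patient=1234;drug=heparin;rate_ml_per_hr=5"
token = cipher.encrypt(record)

# Simulate corruption in transit or at rest by flipping one bit of the ciphertext.
tampered = bytearray(token)
tampered[10] ^= 0x01

try:
    cipher.decrypt(bytes(tampered))
except InvalidToken:
    # The alteration is detected, so a device can reject the record outright
    # rather than act on a silently corrupted dosage.
    print("Tampering detected; record rejected.")
```

The same property is what keeps a corrupted pump setting or lab value from quietly reaching a clinician, which is the silent-failure scenario Erven and Woods warn about.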

Unfortunately, the big-picture problem of insecure medical devices will take a decade or more to solve. That’s because the time to market for most medical devices is usually 5–10 years, sometimes longer. Erven told the DerbyCon audience about a new pacemaker approved under an expedited process that took 12 years. That means a securely designed medical device submitted to the FDA for approval today will not see the inside of a hospital (or the inside of a patient) until the 2020s.

And that’s assuming medical device manufacturers decide right now to make cybersafety a priority, built in by design, and not “bolted on” after the fact. One indication of a medical device manufacturer’s appreciation of the cybersafety problem is whether it has published a coordinated vulnerability disclosure policy, which serves as a welcome mat to independent researchers who wish to improve device security.

The FDA is pushing hard for all manufacturers to publish such a policy, but so far only two—Philips and Dräger—have done so, out of the many hundreds of medical device manufacturers in America today.

Even though medical device cybersecurity is well behind other sectors, such as finance and retail, one thing becomes clear from talking to researchers: Networked medical devices are worth the risk. Automating hospital systems relieves medical staff of rudimentary duties, like delivering medicine intravenously to patients, and can prevent human error, a common cause of injury in hospitals.

Maintaining public trust in medical devices may be the biggest challenge. Hyped-up fears of malware or hackers could undermine public trust in the health care system and cause patients who need treatment to refuse it because of an uninformed risk/benefit analysis.

To ensure that public trust, security researcher Marie Moe argues that source code transparency is essential. Most medical devices run proprietary software that manufacturers consider part of their intellectual property. But that means patients must trust the software to do the right thing—and in Moe’s case, she has reason to worry.

Diagnosed with a heart condition in her early 30s, Moe was implanted with a pacemaker in 2011. Without the device, she would not be alive today. But not long after receiving the implant, a software bug—not a hacker or even malware—nearly killed her.

“I felt like I was going to die, I couldn’t breathe,” she said. “I didn’t know what had happened but something was wrong.”

It took doctors three months to find the problem. It turned out there was a bug in the graphical user interface doctors use: It displayed the wrong values, so the pacemaker’s settings weren’t the same as what appeared on the screen.

“I would really like to have more transparency and more standardized open solutions for how these devices communicate,” Moe said. “That, for me, as a security professional, as a researcher, is important. I want to know how my device is working, so I can trust it.”

As for Woods and his battle with Zotob? He managed to patch the fetal heart monitors, protecting them from reinfection by Zotob. But, like hand-washing in hospitals, cyberhygiene is not a one-off; it is a process of regular care and maintenance. Only by working together—medical device manufacturers, hospitals, and regulators—can we ensure our hospitals remain safe from future Zotobs.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.