Future Tense

Why Don’t Tech Reviews Discuss Gadget Security and Privacy?

It’s hard to do. But someone has to.

Earlier this month, technology researchers revealed that wearable “fitness trackers”—including the popular Fitbit—could “expose their wearers to long-term tracking of their location,” and several, such as the Garmin Connect, could be hacked outright. The news came as no surprise, given the tech industry’s casual practices when it comes to privacy and security.

But the report from Toronto-based Open Effect and Citizen Lab had a bright side. It was an example of something we need to see routinely: solid testing, by experts, of the digital consumer products that are becoming so ubiquitous in our lives—testing not to see how well they work but how secure they are.

We love tech for the convenience it offers. But as we keep learning, sometimes the hard way, the digital things we use every day—our mobile phones and apps, the Wi-Fi routers we use to create home networks, “smart” objects, Web services, and so much more—are grossly insecure.

This state of affairs makes me furious. I hope you feel the same way. And I hope you’ll agree that we need much better help than we’ve been getting when it comes to knowing what hardware and software we can trust.

There’s been some excellent journalism warning of trouble at the big-picture level, but product reviews rarely pay much attention to safety and security. This is partly a failure of journalism, and I have to suspect that the tech press’s dependence on advertising plays a role. But in fairness, it’s technically difficult to investigate code-based products.

The government has been loath to regulate technology, which is mostly a good thing. But when it emerged, decades ago, that car manufacturers were putting safety way down their list of priorities, the government stepped in. Today, manufacturers aren’t allowed to knowingly sell unsafe cars, and a number of other consumer products are likewise held to reasonable safety standards.

And at least one major publication—Consumer Reports—does a good job of looking at safety issues in a variety of product categories, notably cars.

Which is why we need a Consumer Reports, of sorts, for technology: hard-hitting, thoroughly researched analysis of tech gear and software, looking not just at whether it works but at whether it’s safe and secure. One-off studies here and there, however valuable, just don’t cut it.

Consumer Reports (I’m a subscriber) is a nonprofit that takes no advertising. It’s gone through a rough patch in recent years, in part due to competition from free-to-read websites such as the great Wirecutter. But as this excellent Vox profile of the organization and its challenges makes clear, it is unique in its approach to protecting buyers of consumer products and services—and it recognizes the need to update itself.

Laudably, the magazine and its policy-shop sister, Consumers Union, have been loud advocates for online privacy. Now they should directly help us protect our privacy and security from the routine predations we face from criminally sloppy hardware companies and the surveillance-industrial complex (including government spies, not just corporate ones).

How? CR could apply its vaunted testing rigor, and buying recommendations, to the tech-infused products and services on which we’re all growing so dependent.

Reviews of gadgets and apps should be based, in part, on whether they invade privacy or otherwise have poor security. For instance, what “permissions” do mobile apps insist on? (The usual answer: way too many that have nothing to do with functionality but everything to do with collecting data.) What is a company’s previous record, to the extent that this is known, on flaws and patches? What do the terms of service give the company permission to do with data? Is there robust encryption? What known vulnerabilities does it ship with? How easily can it be updated with security fixes? And so on.
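The permissions question, at least, is automatable. Here is a minimal sketch of what a reviewer’s tooling might look like, assuming the Android SDK’s aapt utility is installed and on the PATH; the list of “sensitive” permissions below is my own illustration, not an official taxonomy.

# Sketch: flag an Android app's requested permissions for reviewer scrutiny.
# Assumes the Android SDK's aapt tool is on PATH and you have the APK file;
# the SENSITIVE set is illustrative, not an official standard.
import re
import subprocess
import sys

SENSITIVE = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_SMS",
    "android.permission.CAMERA",
}

def requested_permissions(apk_path):
    """Return the permissions an APK declares, via `aapt dump permissions`."""
    out = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    # aapt prints lines like: uses-permission: name='android.permission.CAMERA'
    return set(re.findall(r"uses-permission: (?:name=')?([\w.]+)'?", out))

if __name__ == "__main__":
    perms = requested_permissions(sys.argv[1])
    flagged = sorted(perms & SENSITIVE)
    print(f"{len(perms)} permissions requested; {len(flagged)} worth scrutiny:")
    for p in flagged:
        print("  " + p)

A reviewer would then ask the obvious follow-up: does a flashlight app really need your location and contacts?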

CR sometimes advises against buying products because of safety concerns. My guess is that many modern products, especially mobile ones, would get a “don’t use” rating if the organization applied this standard in the digital space.

More to the point, given the ubiquitous insecurity of so much of what we use today, CR could tell us which products and services among the contenders are the safest—or at least as safe as possible. I’m betting, likewise, that this would be a distressingly short list.

I recently bought a new Wi-Fi router for our home. I did as much research as possible, but in the end I couldn’t conclusively figure out what off-the-shelf router was likely to be the safest—that is, which one had the best security and, when bugs were found, offered the easiest updating procedures. I got some help from a recent report on the overall crappiness of router security in the Wall Street Journal, which even hired a testing lab to look at a bunch of popular routers. An accompanying chart showed that all the tested routers had issues that could compromise security, with some having multiple security flaws. It was an impressive effort, but it didn’t contain the thing I wanted most: a sentence telling me what router would be safe. Maybe such a thing doesn’t exist.
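Some of this scrutiny doesn’t even require a testing lab. As a rough illustration, the Python sketch below uses nothing but the standard library to check which common administrative services a home router answers on. The 192.168.1.1 address and the port list are assumptions you’d adjust for your own network, and an open port is a question to put to the vendor, not proof of a flaw.

# Sketch: see which common admin services a home router is listening on.
# 192.168.1.1 is an assumed default gateway; adjust for your network.
# An open port isn't a vulnerability by itself, but unencrypted telnet (23)
# or an exposed TR-069 endpoint (7547) is worth asking the vendor about.
import socket

ROUTER = "192.168.1.1"
PORTS = {
    23: "telnet (unencrypted admin)",
    80: "http admin interface",
    443: "https admin interface",
    7547: "TR-069 remote management",
}

for port, label in PORTS.items():
    try:
        # Attempt a TCP connection with a short timeout.
        with socket.create_connection((ROUTER, port), timeout=1.0):
            print(f"open   {port:>5}  {label}")
    except OSError:
        print(f"closed {port:>5}  {label}")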

This feels like a market opportunity on several levels. I’ll gladly pay more, and I’m betting a lot of others would, too, for products and services that put security and privacy at the top of the features list.

Consumer Reports “is actively investigating ways to incorporate (product) security testing” into its methodology, a CR representative told me before emailing an even more vague statement from the organization. I was unable to get any details on what that might mean.

At least one new project is promising: Peiter Zatko, a super-respected security researcher better known as Mudge, is creating a White House initiative that will take an “Underwriters Lab” approach to software security. UL—which does testing, certification, and more, and is more than a century old—has created performance and safety standards for a variety of products, everything from smoke alarms to lamps to industrial-control equipment. UL standards are essentially baselines: if a product meets them, it has achieved a certain level of safety, not absolute safety. It’s unclear how this will work for software, which is infinitely malleable.

Which brings up the most difficult issue for anyone—whether it’s CR, a university research lab, or another entity—wanting to do this work: it’s hard, and it poses new challenges. Ross Anderson, another well-known security researcher, frames it as the Satan-vs.-Murphy problem. Murphy’s law promises that anything that can go wrong will go wrong; accidents due to poor engineering, such as auto defects, generally fall under the Murphy category. But Satan is malicious, and malicious hacking is frighteningly common with software, in part because attackers piggyback on mistakes in the original code. Dealing with Murphy is hard. Satan is worse. (One reason we should be so worried about the Internet of Things is that much of what we use every day in the physical world will contain reprogrammable software, wide open to malicious hacking by everyone from script kiddies to criminals to governments—and it is all getting connected to the rest of our lives.)

Could Consumer Reports pull this off? It wouldn’t have to test absolutely everything, but it would likely need to raise significant new funding, or else divert resources from its bread-and-butter (and valuable) mission of testing household products and cars.

But the products it’s been testing all these years are adding digital capabilities to analog things. Cars have become computer networks on wheels—with some odd security practices so far. If CR wants to keep rating cars and other modern devices for safety in the future, it will have to learn to address at least some software security issues.

Whether it’s Consumer Reports or others who do it, we need a service of this kind. When so many companies sell so much unsafe hardware and software, we need to know which ones we can trust more—or at least which ones we should mistrust less.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.