Software security: Why don’t we have safety labels for the places we store our data?

We Have Warning Labels for Food, Drugs, and Toys. Why Not for Software?

Advice for keeping your data secure.
March 19 2015 8:09 PM

Should We Put Safety Labels on Software?

We all agree that personal data should be safer and more private. We just can’t agree on who’s responsible for that.

Illustration by Lisa Larson-Walker

Last week the mobile payment app Venmo announced security improvements like notifications of account changes, more responsive customer service, and the promise of two-factor authentication. The changes stemmed from reports of fraud in which Venmo users discovered their accounts had been tampered with and their money had been stolen, yet the company had failed to inform them that anything was amiss. But perhaps it’s unfair to blame users for not realizing that Venmo might be less secure than a traditional banking app. After all, their friends were on Venmo, too.
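The two-factor authentication Venmo promised typically pairs a password with a short-lived code generated on the user's phone. As a rough illustration, here is a minimal time-based one-time password (TOTP, RFC 6238) generator using only Python's standard library; the shared secret below is invented for the example.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) built on HMAC-SHA1 (RFC 4226)."""
    counter = timestamp // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Server and phone derive the same short-lived code from a shared secret,
# so a stolen password alone is no longer enough to log in.
secret = b"illustrative-shared-secret"                # hypothetical enrollment secret
print(totp(secret, int(time.time())))
```

Because the code changes every 30 seconds and is derived from a secret the attacker doesn't hold, intercepting one code is of little lasting value.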

The idea behind Venmo is that money transfers should be as frictionless as possible, and that they should be social, too—that users will want their transactions to be visible in order to show off how glamorous or interesting their lives are. Venmo is easy to use, Venmo is fun, and, at least until last week, Venmo had some gaping security flaws. The episode illustrates perfectly the fundamental tension between usability and security. And it raises the question of why we haven’t devised a better way to understand the risks inherent in the digital services we use every day.

We can all agree we’re better off because the foods we buy carry nutrition facts. Shouldn’t we have something similar for digital products?

Most consumers don’t think about the tradeoffs between security and convenience that companies make when they design software. We’re understandably more focused on what products can do for us. The industry realizes this, which is why companies do sometimes commission evaluations of their security practices or turn to auditors, like the division of Norton that issues a “Secured Seal.” But these efforts have done little to enlighten consumers, nor has the industry agreed on the right approach. (And if tech companies think frank discussions about security are bad for business, then there’s no economic incentive to improve.)

The problem begins with how willing we are to hand over data. Security expert Bruce Schneier wrote in June 2013 that the Web has become a “feudal Internet” in which large companies, especially those that offer cloud or other hosting services, control your data. Data lives on networks that aren’t controlled by users, who are limited in their ability to configure the devices through which they access that data. And most people wouldn’t know how to go about making a device more secure even if they could.

“We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm,” Schneier wrote in the Harvard Business Review. “And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.”

Most cybersecurity experts agree that users can’t be expected to study and understand security architecture when so many other things compete for their attention. Jeff Goldberg, a “Defender Against the Dark Arts” at AgileBits (the security company that makes the popular password manager 1Password), says he gets frustrated when companies and industry analysts try to put the onus for security on users. “I find that despicable,” he says. “Everybody deserves security. … Ideally we want what people do naturally and easily to be the secure behavior.”

Achieving this in practice is another story. Hillary Clinton said in a press conference last week that she “opted for convenience” when she chose to use a personal email account throughout her four years as secretary of state instead of a government one. Whether that was her true motivation, the decision may have been a government security risk, and it illustrates how often the tradeoffs between security and convenience actually play a role in our lives.

On the most basic level, companies try to lay out what users should and should not expect through terms of use and privacy policies. But these documents can be difficult to read and are almost always complicated. Goldberg emphasizes that even informed consumers may not fully understand what they’re agreeing to when they elect to sign into a third-party service with Facebook, for example, or decide to share their location with a fitness app. “In the digital space, people may not fully comprehend the implications of how easy it is to aggregate and analyze information,” he says. “Companies can learn much, much more than people think they’re revealing.”

There are a few ways that companies could be required to deliver straightforward security disclosures to users. One would be through government regulations, much in the way we have safety warnings on cleaning products and nutrition facts on food. Steve Wilson, a vice president and principal analyst at the Silicon Valley firm Constellation Research, says it’s not enough for a company to disclose that it tracks user behavior or personal data. Companies should also be frank about why they’re collecting that data and what they want to use it for. “The standard line is, Consumers are not fools. We heard [it] from Big Tobacco in the 1970s and ’80s and we still hear it today,” he says. “If businesses seriously think that consumers are smart, then let’s … tell them what’s going on. That’s my challenge to business. What harm would come? What are you scared of?”
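The disclosure Wilson is asking for could even be made machine-readable, the way nutrition facts follow a fixed schema. No such standard exists today; the sketch below is purely hypothetical, with invented field names and values, to show what a "privacy facts" label might contain.

```python
import json

# A hypothetical "privacy facts" label: what is collected, why, and who
# it is shared with. Every field name and value here is invented for
# illustration; there is no industry standard behind this schema.
privacy_label = {
    "service": "ExampleApp",                          # hypothetical service
    "data_collected": ["email", "location", "contact list"],
    "purpose": {
        "email": "account recovery",
        "location": "nearby-friends feature",
        "contact list": "friend suggestions",
    },
    "shared_with": ["analytics vendors", "advertisers"],
    "retention_days": 365,
}

print(json.dumps(privacy_label, indent=2))
```

A fixed schema like this would let browsers, app stores, or independent reviewers compare services automatically, rather than leaving users to parse bespoke privacy policies.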

Goldberg is less keen on government regulation. “I guess like almost everybody in my industry, I kind of shrink at the notion of a government body approving your security products,” Goldberg says. “I’d like to see a number of credible voices emerge. Basically what I expect is various competing bodies that try to make their criteria of evaluation clear.”

Currently, individual programmers can get certifications that reflect training in cybersecurity best practices. There are also groups—like the British industry nonprofit CREST—that offer certifications to cybersecurity consulting businesses. And there are international frameworks, like the U.S.-EU Safe Harbor data privacy protocol, that require companies to self-evaluate their policies. What hasn’t successfully emerged in the software industry is a widely adopted independent accreditation body (like LEED or Consumer Reports) to offer trusted information to consumers.

One notable attempt on the data privacy side is True Ultimate Standards Everywhere, or TRUSTe. The group was founded in 1997 as a nonprofit industry body to help financial websites (and other services that deal with sensitive information) adopt best practices and self-regulate. “You could liken us to an audit-type firm in the nonprofit days,” CEO Chris Babel says. TRUSTe evaluates and comments on companies’ privacy measures, including the information they collect about their customers, how they share it, and with whom. Then the group grants its seal to companies that meet certain privacy criteria. TRUSTe doesn’t evaluate security methods that encrypt data, for example, or that keep hackers out of a network.

The TRUSTe seal has proliferated across the Web, but hasn’t entered the cultural consciousness to the extent that, say, Good Housekeeping’s seal has. Part of the reason may be that TRUSTe’s role has been slightly murky since it became a for-profit company and took venture capital funding in 2008. Babel says the group wanted to be able to hire engineers and actually develop privacy evaluation software so TRUSTe could do more stringent assessments and also offer companies insight into their own systems.

But in November, TRUSTe settled with the Federal Trade Commission on charges that it had misrepresented its recertification program for privacy practices and that it had perpetuated the misconception that it was still a nonprofit. As part of the settlement, the company agreed to submit transparency reports to the FTC and paid a $200,000 fine. Babel says the company regrets the mistakes but took swift and appropriate action to resolve them. “We continue to see consumers look for the seal,” he says. “We work and get paid for by the enterprises, yet we really help them do the right things by the consumer. It’s a unique place that we hold in the industry. Trust on both sides is critical to our business.”

That’s the inherent problem with these types of independent accreditation bodies: It’s difficult for them to be truly autonomous and reconcile potential conflicts of interest. “There have been efforts to do this and for the most part they haven’t gone very well,” says Lorrie Cranor, a professor at Carnegie Mellon University who studies security usability. “[TRUSTe] has the word trust in it! … But there have been a number of cases where TRUSTe has not done the job they said they would do. Most recently the Federal Trade Commission [probe].”

To Cranor, government regulation is necessary for enforcing cybersecurity best practices. “The industry has been saying for 20 years now that it can self-regulate and there’s no need for additional legislation or regulation,” she says. “They’ve had 20 years to prove themselves, and I don’t think they have proven themselves.”

In the absence of reliable disclosures, the burden of personal online security falls largely to users. The simpler and more straightforward the demands on them, the more likely they are to comply. And one of the most important areas to address is passwords.

Much like the discussion over security disclosures, the debate over authentication has always been heated, and it rages on. In 2012 four prominent cryptographers published a paper at the IEEE Symposium on Security and Privacy that created a framework for analyzing Web authentication approaches and then used it to evaluate a diverse group of methods. “Some schemes do better and some worse on usability. … But every scheme does worse than passwords on deployability,” they wrote in the conclusion. “[W]e are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery.”
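Part of what keeps passwords deployable is that services can at least store them responsibly with well-understood techniques. A minimal sketch of salted, slow password hashing using the standard library's PBKDF2 (the passwords below are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    """Derive a slow, salted hash so a leaked database can't be reversed cheaply."""
    if salt is None:
        salt = os.urandom(16)                         # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The per-user salt defeats precomputed lookup tables, and the high iteration count makes brute-forcing each guess expensive; neither step requires anything from the user, which is exactly the kind of invisible security Goldberg advocates.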

But new approaches to authentication, like using biometrics or physical “keys,” show promise. For example, the FIDO (Fast IDentity Online) Alliance is a nonprofit trade group that is working on an open standard for these types of authentication alternatives. In December it released its first version, FIDO 1.0, so companies—like Microsoft—could begin supporting new types of authentication in uniform and interoperable ways. Constellation Research’s Wilson is excited about FIDO because he says that it’s hard to beat physical keys as a model for security. “You have something in your hand, you stick it into a slot, you turn it clockwise, and something happens,” he says. “[It’s] the real two-factor authentication.”
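At its core, a hardware key proves possession by answering a fresh, random challenge from the server. Real FIDO authenticators do this with public-key signatures, so the server never holds a signing secret; the sketch below substitutes a symmetric HMAC purely to illustrate the challenge-response flow, and is not how FIDO itself works.

```python
import hmac
import hashlib
import os

# Simplified challenge-response in the spirit of FIDO's "something you have."
# FIDO proper uses asymmetric signatures (the server stores only a public key);
# this symmetric HMAC stand-in only demonstrates the protocol's shape.

def server_challenge() -> bytes:
    return os.urandom(32)                             # fresh nonce defeats replay attacks

def authenticator_respond(device_key: bytes, challenge: bytes) -> bytes:
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)                                  # provisioned at enrollment
chal = server_challenge()
print(server_verify(key, chal, authenticator_respond(key, chal)))  # True
```

Because each login uses a new random challenge, a recorded response is useless later; that replay resistance is what makes possession-based factors stronger than a static password.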

Both password supporters and advocates of alternative authentication techniques agree that no matter how the debate resolves, authentication isn’t the only component of strong security. “FIDO and other multifactor techniques are going to be the answer, but they’re not going to solve everything,” says Enrique Salem, the managing director of Bain Capital Ventures and the former president and CEO of Symantec (the security company that makes Norton Antivirus).

The tension between security and convenience can never be fully resolved. But new approaches like password managers and FIDO are promising. Perhaps companies will learn to take their own digital security seriously due to the endless parade of high-profile corporate breaches. Will they ever realize that disclosing their products’ goals to consumers is the decent thing to do? The answer might be “when hell freezes over”—or, as some in the industry might put it, “when passwords are dead.”