

The citizen’s guide to the future.
Aug. 13, 2015, 6:00 AM

Don’t Bug Me

Oracle’s chief security officer hates it when people flag software vulnerabilities. That’s absurd.

Mary Ann Davidson doesn’t need help from the likes of you. Above, the chief security officer for Oracle is pictured in March 2003.

Photo by Tamara Voninski/Fairfax Media via Getty Images

People who make things and people who break things are not natural allies. But gradually, the people who make software and the people who look for ways to break—or exploit—it have been coming to terms with the idea that they can have a mutually beneficial relationship rather than a hostile one. That’s why the now-deleted Monday blog post by Oracle security chief Mary Ann Davidson, in which she railed against users alerting Oracle to vulnerabilities, was so distressing to so many in the computer security community. (Oracle Executive Vice President Edward Screven later said that the post was removed because it “does not reflect our beliefs or our relationship with our customers.”)

The post made clear that Davidson hates it when people write in to tell her that they’ve found flaws in Oracle’s products, and that she finds no value whatsoever in their contributions. Furthermore, she mentions several times that looking for vulnerabilities violates Oracle’s license agreement.


She explained:

I’m not beating people up over this merely because of the license agreement. More like, ‘I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it, we can—unlike a third party or a tool—actually analyze the code to determine what’s happening and at any rate most of these tools have a close to 100% false positive rate so please do not waste our time on reporting little green men in our code.’ I am not running away from our responsibilities to customers, merely trying to avoid a painful, annoying, and mutually-time wasting exercise.

This statement is so arrogant, insulting, and ignorant that it’s hard to believe it could have been written by a high-powered security official at a huge technology company. The claim that Oracle can, on its own, find all the vulnerabilities in its products is nonsense. No tech company in the world is equal to the task of shipping bug-free code. The idea that no one outside of Oracle could have the expertise or ability to find relevant exploitable coding errors in the company’s products is similarly ridiculous: Independent security researchers routinely find important vulnerabilities in commercial products made by companies they don’t work for. And while it is no doubt true that some of the reports Davidson and her team receive are false alarms, the notion that assessing and responding to these concerns is a waste of her time demonstrates a fundamental misunderstanding of the value provided by people who devote their time and energy to finding and reporting software vulnerabilities.

Davidson’s post comes at a moment when major technology firms are making strides toward recognizing that value, and even paying for it. Take, for instance, the increasing number of tech companies willing to pay so-called bug bounties to people who find vulnerabilities in their products. The rationale behind this trend is simple: The people who find flaws in your code or hardware have done you a tremendous service; they have devoted their time and expertise to finding the vulnerabilities and then forgone potentially considerable profits on the black (or gray) markets to report those vulnerabilities back to you, so you can better protect your customers.


Paying those people for their time (and their decision to use their skills for good) makes perfect sense if you want to encourage the wider world to help you test your products for flaws and report those flaws back to you. But Davidson describes this approach in only the most dismissive terms. “Bug bounties are the new boy band (nicely alliterative, no?) Many companies are screaming, fainting, and throwing underwear at security researchers to find problems in their code,” she writes, continuing:

Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us except—we had already found all of them and we were already working on or had fixes. Woo hoo!)

This is, again, a strange and misguided argument to hear from a security professional. For one thing, Davidson assumes that it’s a good thing that Oracle is getting only a few external reports of security vulnerabilities from researchers. Of course, if she’s busy replying to everyone who reports a vulnerability with a message about how they’ve violated Oracle’s license agreement, it’s no surprise that the company receives relatively few such reports from outside researchers.

For another, she completely disregards the difference between vulnerabilities detected internally by Oracle’s own employees before products are rolled out to customers and the ones that are found after the company’s code goes live. I’m sure that Oracle has a terrifically talented team of security engineers testing its code and looking for bugs throughout the development phase—and that’s important, to be sure. But the bugs that team doesn’t find, the bugs that are still there when code is released and in use, are still hugely important—even if there are far fewer of them.


Those are the bugs that could enable actual security breaches, the bugs that have made it past the eyes of Oracle’s many engineers and could therefore impact the company’s many customers. Even if there are only three of those bugs for every 87 that were identified earlier in the development process—you still want to find them! (As a side note, using the number of vulnerabilities as a metric for anything in security is generally pretty meaningless—software is not more or less secure because of how many vulnerabilities it has, but because of how severe the consequences of exploiting those vulnerabilities are.)

I’m not advocating that tech companies should rely on independent security researchers instead of security-focused employees. The former should augment—not replace—the latter. But to believe that your security team is so good as to require no assistance from the outside world is the height of hubris—and stupidity.

Needless to say, criminals don’t care about your license agreement or your Common Criteria certifications or FIPS 140 certifications. (Davidson references both standards in support of her argument that Oracle knows more than you do about security, so don’t waste her time by trying to help the company.) So discouraging outsiders from looking for bugs just gives the bad guys a leg up in finding ways to exploit your products. Even worse, by blatantly insulting security researchers and telling them you don’t want their help, Davidson runs the risk of dissuading them from reporting their discoveries through legitimate channels and encouraging them to instead sell those Oracle vulnerabilities on the black market, or exploit the bugs themselves. As a general rule, it is a bad idea to piss off the people who make a career of finding computer security vulnerabilities.

Computer security is rife with biological metaphors—beginning with the notion of computer viruses and infections—so it’s always tempting to return to that language, to consider the different biological relationships among organisms: the parasitic, the commensal, and the symbiotic. For those who have forgotten their high school biology, parasitic relationships are those in which one organism benefits and the other is harmed—think tapeworms and humans, or people who find security flaws and exploit them to steal millions of dollars. In commensal relationships, one organism benefits and the other is not affected—for instance, barnacles that attach themselves to whales, or security researchers who report vulnerabilities, benefiting a vendor, but receive no compensation for their findings. Finally, there are symbiotic, or mutualistic, relationships, in which both parties benefit—the plover birds that clean the teeth of crocodiles, or the security experts who report useful vulnerabilities to vendors and are paid and recognized for their work. Davidson’s post is mired in a parasitic view of the relationship between security researchers and software vendors, while other firms are gradually trying to develop a more symbiotic model.

Or, for another biological metaphor, consider the notion of waning immunity in medicine—when people’s resistance to a disease diminishes over time so they have to be periodically re-exposed to the infection to build up their antibody levels and maintain their protection. Oracle, too, is better protected when it’s open to regular exposure to the prying eyes of the outside world—as well as the insights and expertise those eyes have to offer. Perhaps, as Davidson suggests, Oracle will already know about some of those outside discoveries, perhaps some of them won’t be relevant or useful at all, but almost certainly some of them will be news to Oracle and dangerous for its customers—especially if the company continues to cling to an outdated and backward security posture.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.