Future Tense

Hire (Some of) the Hackers

The U.S. government needs cybersecurity experts who have thought like intruders.



One of my computer security students came up to me after class recently, worried about his ongoing security clearance for a government internship. “I had a small run-in with the IT staff on campus a few years ago,” he told me. “They just gave me a warning, there were no repercussions or anything, but do you think that will affect my clearance?” We had just spent a week in class discussing the Computer Fraud and Abuse Act and reading about all the ways that computer-based misbehavior can land you in serious trouble with the law. So it wasn’t entirely surprising that some of my students—almost all of whom are computer security majors—would be thinking about the potential consequences of their technical expertise.

I told my concerned student (truthfully) that I didn’t know a lot about what went into clearance decisions and that the most important thing was to be completely honest and transparent about the incident with the investigators. If he did that, I said, I thought he would probably be fine. So it was with some discouragement that I read the Sept. 3 story in Federal Times about how hard it was for government agencies to find people with cybersecurity expertise who could pass the requisite clearance process.

“Many potential employees honed their cyber skills engaging in activities that are less than legal,” Aaron Boyd writes in the piece, adding that “rules on past behavior might exclude some of the top talent the government is interested in recruiting.” As an example, Boyd cites one anonymous applicant to the FBI who thinks she was denied a job because she confessed to downloading copyrighted files.

Frankly, I find that difficult to believe. The official FBI employment eligibility criteria say nothing at all about file sharing or other forms of computer misuse. Instead, they’re mostly focused on the question of whether potential applicants have used marijuana at any point during the last three years or any other illegal drug in the past 10 years.

But drug habits aside, if you’re hiring only cybersecurity experts who have never so much as dabbled in testing the (very, very unclear) limits of legal computer-based behavior, then you’re probably doing something wrong. Cybersecurity experts need to anticipate your enemies’ online actions and defend against them, to find the holes in your computer systems and the vulnerabilities in your code. If your defenders have never thought like intruders, then you’re probably in trouble. Or, more accurately, we’re probably in trouble—all of us who rely on the U.S. government to protect various critical computer systems and networks.

I’ve written before about the value of enlisting even hardened cybercriminals in the fight against cybercrime. It is risky to trust anyone with criminal tendencies—one need look no further than Albert Gonzalez, who was recruited as a Secret Service informant to help track down cybercriminals and, while serving in that capacity, planned a series of very large, very lucrative breaches of payment card information from retailers including (most famously) T.J. Maxx. Yet I’m still inclined to believe that it’s a mistake to write off anyone with a slightly questionable history of testing the limits of acceptable computer use.

Certainly, some people’s pasts may provide ample reason not to offer them employment with the federal government. (I, for one, don’t think Gonzalez should have another chance to give back to society.) But the government should approach people’s youthful technological experimentation with a little less judgment than it applies to, say, their history of drug use or their interactions with foreign governments. Experimenting with computer systems in ways that are perhaps not always appreciated might be evidence of an inquiring mind and an interest in security, rather than weak morals, criminal intentions, and a likelihood of committing treason.

That may mean we make some mistakes—but the U.S. government already makes mistakes in issuing clearances. And it’s not at all clear that a more rigorous clearance process is the best way to prevent leaks. Presumably, people in the U.S. intelligence community are working to make sure the government doesn’t experience another breach on the scale of Edward Snowden’s. And perhaps they believe a better background check process would have flagged the Electronic Frontier Foundation and Tor Project stickers on his laptop.

But that seems pretty nonsensical: For one thing, supporting the EFF and Tor is not radical or dangerous. But perhaps more importantly, the key to preventing massive leaks is much more likely to be better technical controls that monitor how much information and access any individual employee is using at any given time, as well as social controls requiring the authorization of multiple employees to access especially sensitive data and systems. Placing technical constraints on what individual workers can and cannot do within the context of government systems seems like a far better use of time and resources than scrutinizing everyone’s pasts (and laptops) that much more closely for any hints of eccentricity or disobedience.

The security clearance process is not by any means worthless. When you’re hiring people to handle sensitive information and important missions, it makes sense to pay attention to the red flags raised by their friends and co-workers, or by their financial and criminal histories. But it also makes sense to relax the standards for computer conduct a little when it comes time to hire people with a knack for computer security.

If the U.S. government wants to employ a formidable cybersecurity workforce, it will need people who, in the words of Aaron Boyd from the Federal Times, “honed their cyber skills engaging in activities that are less than legal.” It will need people who can think like its enemies—not just people who follow rules to the letter and sail through security clearances.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.