Future Tense

An Enormous Game of Capture the Flag Could Change Cybersecurity Forever

Photo: The Cyber Grand Challenge team monitors submissions during the second scored event. (DARPA)

The Defense Advanced Research Projects Agency (DARPA) announced in 2013 that it was launching a “Cyber Grand Challenge” to explore the idea of automating cybersecurity. If computers themselves could identify vulnerabilities and create working patches, the timeline for dealing with bugs (like Heartbleed) could shrink from days or weeks to seconds. But it was a pretty radical idea, and it wasn’t clear what would come out of it. Now, more than a year later, the project is headed into its first major competition, and the preliminary results are promising.

The Grand Challenge is modeled on a type of hacking competition called Capture the Flag, but instead of being played by humans, the DARPA version will be played by autonomous computers created by human teams. So far, teams have been able to participate voluntarily in practice sessions called scored events. DARPA program manager Mike Walker says that the first scored event in December (after the teams had been working for seven months) was a “rough experimental prototype start” that produced about three functional patches.

Things changed at the second scored event in April, though. The competing teams were able to definitively confirm bugs in 23 out of 24 pieces of test software, and they produced patches for all of the software. Walker isn’t getting cocky; he knows that the work is far from over. But he says, “We think we’re doing well.”

On June 3, 104 teams will compete in the Grand Challenge qualifying round to identify seven finalists for the summer 2016 final showdown. In digital Capture the Flag, all teams are given the same software at once (this is like the field in traditional Capture the Flag), and the software contains data that needs to be defended (like a flag in the physical game). When the teams discover vulnerabilities in the code, they have to decide whether to use the weakness to attack their opponents, begin working on a patch to defend their own data, or find a way to do both simultaneously. The game organizers feed the teams more data throughout the game. The scoring is based on how many “flags” the teams take from others and how many of their own they can defend to the end.

The cybergame is tough for humans, but designing computer programs that can play autonomously is a whole other level of difficulty. There are “three incrementally harder things” for computers to do autonomously, Walker says. “One is ‘I think there’s a bug and I tell a human and a human figures out if it’s true.’ The other is ‘I’m certain there’s a bug and I can prove it.’ … And even harder than that is ‘I know how to fix the problem without breaking the software.’ That’s hardest of all.”
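To make those three levels concrete, here is a minimal sketch in C, not code from the competition itself, of the kind of memory-safety bug these systems hunt: a classic stack buffer overflow, next to the bounded-copy patch an autonomous system would have to synthesize without breaking the program’s normal behavior. The function names and the 16-byte buffer are illustrative assumptions.

/* Illustrative sketch only: a classic stack buffer overflow and its patch.
   Function names and the 16-byte buffer are hypothetical, not from CGC. */
#include <stdio.h>
#include <string.h>

/* Vulnerable: strcpy() has no bounds check, so any input longer than
   15 characters writes past buf and corrupts the stack. */
void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);               /* the bug a system must find and prove */
    printf("Hello, %s\n", buf);
}

/* Patched: bound the copy and guarantee NUL termination. Legitimate inputs
   behave exactly as before, which is the hard "fix it without breaking the
   software" requirement Walker describes. */
void greet_safe(const char *name) {
    char buf[16];
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet_unsafe("world");           /* fine here: "world" fits in the buffer */
    greet_safe("world");             /* patched version, identical output */
    return 0;
}

In Walker’s terms: a human auditor eyeballing the suspicious strcpy() is level one, proving the overflow is actually triggerable is level two, and automatically generating a fix like the bounded copy above while preserving normal behavior is level three.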

The winners of the Cyber Grand Challenge will take home $2 million for first place, $1 million for second place, and $750,000 for third place. That’s a lot of money at stake for something that was barely conceivable a few years ago. “When [DARPA Information Innovation Office director Dan Kaufman] asked me … could machines play this game? I just said I don’t know. It was a completely new thought,” Walker says. “Everyone I asked, no one knew. And when no one knows, it’s an interesting problem.”