Last week, the controversial cybersecurity bill known as the Cyber Intelligence Sharing and Protection Act passed the House of Representatives. CISPA, which would provide a mechanism for the government and private companies to share information regarding cyberthreats, has the support of hundreds of companies. However, civil liberties organizations including the Center for Democracy and Technology, the Electronic Frontier Foundation, and the ACLU are strongly (and justifiably) opposed to the bill on privacy grounds. For example, CISPA could allow companies to give private communications such as emails to the government, with no judicial oversight, if they contain what is deemed to be “cyber threat information.” The White House has threatened to veto the bill, expressing concern over its lack of “privacy, confidentiality, and civil liberties safeguards.”
Depending on whom you ask, cyberwar is either the “next threat to national security,” as the book by Richard Clarke and Robert Knake was titled, or “more hype than hazard,” as Thomas Rid of King’s College London recently wrote in Foreign Policy.
But set aside the debate over how serious a threat cyberwar may be and the question of how to ensure security without sacrificing individual privacy. Instead, let’s focus on a fundamental technological shift that has occurred while most of us weren’t looking: Over the last decade or so, thoroughly analyzing the world’s data to identify potential cyberthreats has gone from difficult to impossible. The volume of digital information has become far too large.
This shift completely redefines the cybersecurity problem. When the task of finding cyberthreats was merely becoming more difficult, it was always possible to respond by getting a bigger budget, buying more computers, and hiring more analysts. But the old solutions don’t scale any more. The idea underpinning CISPA—that the government should sit at the center of the cybersecurity universe, collecting all of the information about cyberthreats, analyzing it, and dispensing solutions—will no longer work. There is simply too much data. The government can be an essential supporting actor in the effort to secure American networks and to prevent intellectual property theft. But it can’t, and shouldn’t try to be, the orchestra conductor.
According to the EMC-sponsored 2011 IDC Digital Universe Study, 1.8 trillion gigabytes of data were created or replicated in 2011—an amount that IDC described as equivalent to “every person in the world having over 215 million high-resolution MRI scans per day.” Cisco has projected that by 2015, 1 million minutes of video will cross global networks every second, and that there will be twice as many networked devices as there are people in the world.
Who is capable of thoroughly analyzing all of that traffic—or at least the subset that passes through American networks and companies—to identify potential cyberthreats against the United States? No one. Not the U.S. government. Not companies working with the government. It is simply not possible.