Future Tense

Human Weakness in Cybersecurity

Hackers don’t need to access classified email to get important information.

The U.S. military Joint Chiefs of Staff testify before the Senate Armed Services Committee on Capitol Hill on May 6, 2014.

Photo by Chip Somodevilla/Getty Images

The Joint Chiefs of Staff’s unclassified email system is now back online after being down for more than two weeks, following a breach that some officials have blamed on the Russians. Media reports have stated that no classified information was stolen in the attack. But that isn’t quite as reassuring as it might seem: A great deal of metadata and surrounding context can still be inferred from unclassified emails, including the social connections between people, the names of projects a person is working on, how emails are formatted, and what jargon a person uses. On the surface, this kind of information might seem innocuous. But in the hands of a skilled and patient adversary, it can be used to exploit human weaknesses in cybersecurity.

It could be particularly useful for spear phishing: email-based attacks that target specific organizations or individuals. The goal of spear phishing is to fool people into circumventing their own cybersecurity defenses, tricking them into sharing their passwords, replying with sensitive documents, or installing malware. One notorious example was the 2011 breach at the security company RSA, in which attackers successfully sent a malware-laden spreadsheet titled “2011 Recruitment Plan” to two small groups of employees and possibly stole information about the company’s cybersecurity products.

Spear phishing was also used to initiate the recent data breach at the Joint Chiefs of Staff, and it is surprisingly effective in practice. In past research, both at Carnegie Mellon University and at my company, Wombat Security Technologies, my colleagues and I conducted numerous studies to understand why people fall for these fake emails. We found that a spear phish with even minimal personalization will still get about 50 percent of recipients to log in with their username and password.

Existing countermeasures, like email filters and anti-virus software, are getting better at detecting these kinds of fake emails and blocking fake websites, which means that spear phishers have to up their game. One approach is to target specific individuals, sending out just a few fake emails to stay under the radar. Without much context, though, it’s easy to do this clumsily. For example, one of my fellow professors once got a formally written email asking for a letter of recommendation. He immediately spotted it as a fake because he was good friends with the person being impersonated and knew that he would never write such a formal email. By spending more time learning about each potential target, an attacker can use the right kind of language and make requests that seem like part of the normal flow of work.

Media reports indicate that the president’s daily schedule was part of the data breach on the Joint Chiefs of Staff. Let’s say that our attackers, now apparently shut out of the system, wanted to try to get the latest daily schedule. They might come up with a list of potential targets and choose someone who does not seem to be computer savvy. Next, they might figure out who the potential target knows and select a person who might have a plausible need for the schedule. Lastly, they might send a fake email with an urgent request, using the writing style and formatting of the person being impersonated.

Attackers can also use information inferred from the unclassified emails to guide their data gathering on social media. For example, back in 2012, it was reported that several senior British military officers and Defense Ministry personnel had friended a fake Facebook profile of a U.S. Navy admiral. While that particular fake profile was detected fairly quickly, it’s not hard to imagine using social connections inferred from email to create less obvious fake profiles. Digging through the emails, an attacker might find someone you met once about six months ago, create a fake profile of that person, and then send you a friend request. While this seems like a lot of work, keep in mind that the threat model here is an adversary who is very patient and willing to spend a lot of time and energy to acquire sensitive information.

Why would attackers care about social media? One reason is to hone their spear-phishing attacks. If attackers want to send fake emails in your name, they can do a better job of impersonating you if they know more about your interests, your writing style, and who your friends are. They might also use social media messaging services rather than email to contact potential targets, bypassing any anti-phishing filters in place.

Another reason is gathering intel. While most people are careful not to directly leak sensitive information on social media, they still might inadvertently disclose useful information. Examples include geotagged photos (indicating where a person has been and what was there) and the names of projects people have worked on in the past. Chris Soghoian, the principal technologist at the American Civil Liberties Union, has pointed out that you can search LinkedIn for a few publicly known National Security Agency programs and discover the names of many others that have not yet been disclosed. With a bit more work, you could probably figure out who has worked on what, and guess which programs are most important (based on the number and kinds of people working on them) and what those programs are (based on the skills of those people). Combining public sources with the stolen emails, an attacker could likely do the same kind of analysis for programs at the Department of Defense.

Intelligence gathering using social media isn’t just hypothetical, either. At the 2010 Black Hat security conference, Thomas Ryan presented his research on creating a fake persona on LinkedIn named Robin Sage, a young, attractive, and highly educated woman with specialties in information security. Ryan wanted to probe how well people could spot the fake and see how much sensitive information he could learn. As a strategy, Ryan friended lots of other people first, so that an intended target would see that they had many friends in common and thus be more likely to approve the connection. Ryan found that few people spotted the fake. In fact, several people offered Robin gifts and corporate jobs, and some even shared potentially sensitive documents with her. (One possible lesson here is to be selective about whom you friend and what you share on social media sites. Another is that men can be really dumb when it comes to attractive women.)

The danger of seemingly innocuous data isn’t just an issue for the Department of Defense; it’s a universal cybersecurity issue for all organizations. All of the attacks above exploit the human element of cybersecurity, perhaps the most difficult part to defend, since people vary widely in their skills, their awareness of security threats, and their motivation to be secure. But the human element may also be the most important part today: Nearly every major data breach of the past few years was due to a human failure. It doesn’t matter how many firewalls, certificates, or two-factor authentication mechanisms you have, or how much encryption software, if the person behind the keyboard falls for an attack.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.