Future Tense Newsletter: How to Look Hot on Mars
In the past week, Future Tense explored survival in space, the latest cybersecurity developments, and more. Join us, won’t you?
The Martian may be pure hokum, but according to Ellen Stofan, NASA’s chief scientist, if you look at the real science of the movie, it’s not entirely implausible hokum. In fact, as Rachel Gross wrote, the film practically worships science, but it also maintains its sense of wonder. What did I think? I [redacted] loved every minute of it.
Maybe we still had space on the brain, because we also checked in with the designers who are trying to make spacesuits sexier. If we’re headed in that direction, though, we may want to rethink the often-sexist language of spaceflight.
Meanwhile, it’s the most wonderful time of the year: National Cyber Security Awareness Month! As part of the festivities, there was a scary new attack on Microsoft Outlook’s Web application. Ross Schulman discussed three laws that Congress should change to let cybersecurity researchers do their jobs. In the U.K., meanwhile, they’re attempting to woo a new generation of hackers with a buggy, terrible game. Good luck with that, folks.
Here are a few other stories that were really having a moment this week:
- Digital rights: Rebecca Wexler argues that defendants should have the right to inspect code used to convict them.
- Opting out: Verizon is sharing private, identifying information about its subscribers with advertisers. We explained how you can opt out.
- Butt dials: Our phones may be smarter than ever, but that doesn’t mean they aren’t calling 911 from our pockets.
- Smart cities: Streetlamps of the future may be outfitted with sensors, cameras, and more. Does everything have to be connected to the Internet?
Court Strikes Down Data-Transfer Pact That Lets Tech Companies Move European User Data
On Tuesday, the highest court in the European Union ended a data-transfer agreement known as the "Safe Harbor" pact after 15 years of use.
The agreement allowed tech companies to move user data from European to United States data centers if the companies offered certain privacy settings and met other minimum requirements. Eliminating the pact is a win for privacy advocates, who criticized it for potentially exposing EU user data to U.S. surveillance. But it could have ramifications for the tech industry, since companies will now have to rely on data centers that are physically in the EU or find other legal justifications to allow data to flow to the United States.
The Wall Street Journal estimates that roughly 4,500 companies, from tiny startups to tech giants, were invoking the pact in their daily operations. Some, like Microsoft and Facebook, have backup plans, but small companies with limited resources may struggle to implement new strategies. The worst-case scenario would be that European customers can't use certain U.S. services, leading to problems for international trade.
Brian Hengesbaugh, a privacy lawyer with Baker & McKenzie in Chicago who worked on the original pact, told the New York Times, “We can’t assume that anything is now safe. ... The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.”
The decision fits into a broader discussion about how to defend users' privacy rights at the largest scale. "Today’s Judgment puts people’s fundamental right to privacy before profit," Renata Avila, the global campaign manager of the World Wide Web Foundation, said in a statement. "We hope that this EU ruling will also inspire countries around the world to review their data protection and exchange policies, and enhance the protection of their citizens."
Here’s How To Opt Out of Verizon’s Scary New Privacy Violation
Maddeningly, this intrusive policy shift requires that users actively opt out if they don’t want to be directly monitored. As Angwin notes on Twitter, you can exempt yourself from the initiative here (she and Larson point out that you can also call 866-211-0874), though you’ll still have to manually log into your account or otherwise wrestle with Verizon customer service.
This creepy corporate synergy comes on the heels of Verizon’s $4.4 billion purchase of AOL earlier this year. In its initial Privacy Notice, Verizon coyly suggests that this isn’t that big of a deal. One sentence reads, “We do not share information that identifies you personally as part of these programs other than with vendors and partners who do work for us.” That’s an awfully big “other than.” Per ProPublica, “AOL’s network is on 40 percent of websites,” which should make for quite a few “vendors and partners.”
Given the season, this news has an appropriately haunting character: It appears to be connected to controversial “zombie cookies” that relied on undeletable information buried in Verizon phones and tablets to track customers’ browsing habits, even if the user deleted the cookie. Though the company responsible for those cookies supposedly killed off the program after protests, the technology that empowered it seems to have risen once again. Privacy advocates should have gone for a shot to the head the first time around. When you’re dealing with zombies, it’s the only way to be sure.
Convicted by Code
Secret code is everywhere—in elevators, airplanes, medical devices. By refusing to publish the source code for software, companies make it impossible for third parties to inspect it, even when that code has enormous effects on society and policy. Secret code risks security flaws that leave us vulnerable to hacks and data leaks. It can threaten privacy by gathering information about us without our knowledge. It may interfere with equal treatment under law if the government relies on it to determine our eligibility for benefits or whether to put us on a no-fly list. And secret code enables cheaters and hides mistakes, as with Volkswagen: The company admitted recently that it used covert software to cheat emissions tests for 11 million diesel cars spewing smog at 40 times the legal limit.
But as shocking as Volkswagen’s fraud may be, it only heralds more of its kind. It’s time to address one of the most urgent if overlooked tech transparency issues—secret code in the criminal justice system. Today, closed, proprietary software can put you in prison or even on death row. And in most U.S. jurisdictions you still wouldn’t have the right to inspect it. In short, prosecutors have a Volkswagen problem.
Take California. Defendant Martell Chubbs currently faces murder charges for a 1977 cold case in which the only evidence against him is a DNA match by a proprietary computer program. Chubbs, who ran a small home-repair business at the time of his arrest, asked to inspect the software’s source code in order to challenge the accuracy of its results. Chubbs sought to determine whether the code properly implements established scientific procedures for DNA matching and if it operates the way its manufacturer claims. But the manufacturer argued that the defense attorney might steal or duplicate the code and cause the company to lose money. The court denied Chubbs’ request, leaving him free to examine the state’s expert witness but not the tool that the witness relied on. Courts in Pennsylvania, North Carolina, Florida, and elsewhere have made similar rulings.
We need to trust new technologies to help us find and convict criminals but also to exonerate the innocent. Proprietary software interferes with that trust in a growing number of investigative and forensic devices, from DNA testing to facial recognition software to algorithms that tell police where to look for future crimes. Inspecting the software isn’t just good for defendants, though—disclosing code to defense experts helped the New Jersey Supreme Court confirm the scientific reliability of a breathalyzer.
Short-circuiting defendants’ ability to cross-examine forensic evidence is not only unjust—it paves the way for bad science. Experts have described cross-examination as “the greatest legal engine ever invented for the discovery of truth.” But recent revelations exposed an epidemic of bad science undermining criminal justice. Studies have disputed the scientific validity of pattern matching in bite marks, arson, hair and fiber, shaken baby syndrome diagnoses, ballistics, dog-scent lineups, blood spatter evidence, and fingerprint matching. Massachusetts is struggling to handle the fallout from a crime laboratory technician’s forgery of results that tainted evidence in tens of thousands of criminal cases. And the Innocence Project reports that bad forensic science contributed to the wrongful convictions of 47 percent of exonerees. The National Academy of Sciences has blamed the crisis in part on a lack of peer review in forensic disciplines.
Nor is software immune. Coding errors have been found to alter DNA likelihood ratios by a factor of 10, causing prosecutors in Australia to replace 24 expert witness statements in criminal cases. When defense experts identified a bug in breathalyzer software, the Minnesota Supreme Court barred the affected test from evidence in all future trials. Three of the state’s highest justices argued to admit evidence of additional alleged code defects so that defendants could challenge the credibility of future tests.
Cross-examination can help to protect against error—and even fraud—in forensic science and tech. But for that “legal engine” to work, defendants need to know the bases of state claims. Indeed, when federal district Judge Jed S. Rakoff of Manhattan resigned in protest from President Obama’s commission on forensic sciences, he warned that if defendants lack access to information for cross-examination, forensic testimony is “nothing more than trial by ambush.”
Rakoff’s warning is particularly relevant for software in forensic devices. Because eliminating errors from code is so hard, experts have endorsed openness to public scrutiny as the surest way to keep software secure. Similarly, requiring the government to rely exclusively on open-source forensic tools would crowd-source cross-examination of forensic device software. Forensic device manufacturers, which sell exclusively to government crime laboratories, may lack incentives to conduct the obsessive quality testing required.
To be sure, government regulators currently conduct independent validation tests for at least some digital forensic tools. But even regulators may be unable to audit the code in the devices they test, instead merely evaluating how these technologies perform in controlled laboratory environments. Such “black box” testing wasn’t enough for the Environmental Protection Agency to catch Volkswagen’s fraud, and it won’t be enough to guarantee the quality of digital forensic technologies, either.
The Supreme Court has long recognized that making criminal trials transparent helps to safeguard public trust in their fairness and legitimacy. Secrecy about what’s under the hood of digital forensic devices casts doubt on this process. Criminal defendants facing incarceration or death should have a right to inspect the secret code in the devices used to convict them.
Twitter’s New “Moments” Feature Tries to Help You Make Sense of Twitter
For years Twitter has been the world's best tool for following live events as they unfold—provided you know where and how to look. But for the uninitiated, the social network has had a reputation for being frustrating or even alienating.
Moments, which is live now, is debuting on the second day of co-founder Jack Dorsey's return to Twitter as CEO. The timing reflects the substantial evolution Twitter needs as a company. If Moments is successful, it just might be the thing that finally gets Twitter growing again. But if it doesn't, it will be in good company.
Moments appears as a little lightning icon on Twitter's mobile app and website between the tabs labeled "notifications" and "messages." Tap it and you'll load a screen that looks more like a snazzy mobile news site than a social network. There Twitter has compiled a selection of the most noteworthy stories and topics that people are discussing on the platform at any given time. Click on one, and you'll pull up a page that looks different from any other Twitter has offered. It is, as you might expect, a series of tweets handpicked by Twitter "curators" to tell the story in question—context and all. But you're not just seeing a screen full of tweets. Instead, each tweet appears as a full-screen photo or video, with the text of the tweet itself functioning like a caption.
Importantly, the stories are available whether or not you're a Twitter user. Even if you aren't logged in, you can still find them just by visiting the Twitter homepage, where they'll replace the algorithmically selected tweets that Twitter debuted on its logged-out homepage earlier this year.
Twitter's Moments are meant to be representative of the conversation happening around an issue, but they can also stand as content in their own right, and you can tweet, retweet, or favorite them. Share a link to a Moment, and it will appear in your followers' timelines with a preview card, the way any other story link would. The implication is that the Moments tab will become a sort of alternate universe Twitter, because when someone shares a Moment, clicking it will take you to the other tab. It's worth noting that historically, features like this that are only available on a separate tab have struggled to catch on with Twitter's users.
For now, most of the stories are being curated by a "very, very small" team of Twitter employees and product managers, Twitter product manager Madhu Muthukumar told us. But Twitter is also partnering with publications that will be able to curate their own Moments, share them on Twitter, and embed them on their own websites. Launch partners include the New York Times, BuzzFeed, and Major League Baseball, and you can see how Moments could eventually become a battleground for publishers who want their version of a particular story to surface prominently in the tab.
If all works as Twitter hopes, Moments will not only be a way into the site for new users, but will also be a way for experienced users to get caught up on a story that's been unfolding on the site. (Moments even has a "You're all caught up" message when you've viewed all of your newly available stories.) You'll also have the option to "follow" a story once you've looked at it in the Moments tab. For instance, if you follow a story about the South Carolina floods, Twitter will automatically drop new tweets from that story into your timeline as it develops. Once the story is over, you won't have to unfollow it—you'll just stop seeing those tweets.
The "Moments" feature is Twitter's attempt to address two of its biggest problems at once. One is that the site is always awash in content but lacks context. At any given time, the people you follow are having conversations about all different topics, most of which began before you logged in. You end up having to spend time figuring out what they're even talking about before you can grasp the substance of what they're saying.
The second problem Moments is trying to solve is that it's often hard to figure out whom to follow on Twitter. That's especially true for users who are new to the site, but even Twitter pros struggle to know where to turn when they're trying to get up to speed on a new topic or breaking news event. Take the floods as an example—you might have spent years building up a list of people to follow during breaking news events, and you might even follow some weather experts. But you're unlikely to follow the very people who happen to be on the ground in South Carolina this week. And then even if you do, their tweets could still be overshadowed by the sheer volume of other people you follow talking about unrelated things.
The idea of mining Twitter and aggregating the best tweets about a given story or topic is not new. It's been done for years by the likes of BuzzFeed and Mashable. But it hasn't been done by Twitter itself before, which always seemed a little bit odd. Now with the rise of services like Facebook Notes and Apple News—where content is increasingly being presented in a controlled ecosystem—it's gone from an oversight to a problem for Twitter. Moments is a step in the right direction. But it might take a few more steps to change the minds of all those people who tried Twitter in the past, found it confusing, and gave up.
The Real Science Behind The Martian
As a card-carrying space nerd and NASA’s chief scientist, I love space movies, from Star Trek to Star Wars to my all-time favorite—The Dish, an Australian comedy that celebrates that first moment when Neil Armstrong stepped down onto the surface of our moon. Until now, I am somewhat embarrassed to admit, my favorite Mars movie has been John Carter. But it has been superseded by The Martian.
The Martian, set in the 2030s, covers the travails of the third human mission to Mars. Aided by NASA scientists, the film gets much right—from technologies NASA is developing now, to accurate Mars geography.
You may be wondering how realistic The Martian is. In fact, President Obama has stated a goal of getting humans in the Mars vicinity in the 2030s, and NASA is working hard to make this happen. Our journey to Mars has already begun, and we have put our most brilliant minds to work on each of its three main phases.
The first is what we call the Earth-reliant phase: technology development and human health risk reduction that we work on every day on the International Space Station. The one-year mission aboard the station right now, with astronaut Scott Kelly and cosmonaut Mikhail Kornienko, is a big part of this. We need more data on the long-term effects of microgravity on human health, and we will use the next several years on the ISS to develop countermeasures to health effects like bone density loss and muscle wasting. Mars missions will require up to three years in reduced gravity, so we need to make sure astronauts can not only survive but thrive, as they move outward to explore this new world.
The second phase of our journey to Mars takes place in what we call the proving ground in the early 2020s. We are preparing to send humans beyond low Earth orbit to the vicinity of the moon for the first time in decades using our new rocket, the Space Launch System, with the Orion capsule. In this proving ground out near the moon, we will test systems needed for Mars missions, like advanced life support systems and advanced propulsion, while still being able to return to Earth in days, rather than the 7–8 months it takes to get back from Mars. In the meantime, we will continue to explore Mars with robotic spacecraft like the Curiosity rover, making measurements such as characterizing the radiation environment to ensure humans can safely explore the Martian surface. Also in the 2020s we will be testing advanced entry, descent, and landing—or EDL—systems at Mars that we will need to land human exploration payloads much larger than those we’ve landed with robotic explorers.
The third phase of our journey to Mars will look much like the Ares missions of The Martian. An international team of astronauts will orbit Mars, maybe visit the Martian moons, and eventually explore the surface of the red planet. We may be able to use the water we have discovered on Mars for rocket fuel and to help humans survive; that will allow us to live and work on Mars, conducting scientific research on the surface. Our astronauts will wear advanced space suits that give them increased mobility, bringing the intuitive thinking and flexibility to Mars exploration that only humans can. And then we will bring those astronauts safely home. As we visit Mars multiple times, we will build up infrastructure on the surface to expand the capabilities and reach of humans on Mars.
We can make this happen in an affordable way by joining forces with our international partners. In fact, 16 space agencies from around the world are already working together on this. We also will partner with the commercial sector in new and innovative ways and bring the public along through citizen engagement projects like technology challenges and prizes. The Martian may be fiction, but at NASA we are working to make it a reality.
One quibble I have with the movie is that it neglects to address the ‘Why Mars?’ question. This is the most important question behind everything NASA does. Part of why we explore is human nature—showing we can push ever outward toward the next frontier, developing new technologies and reaping the economic rewards as we do so. But as a planetary scientist and a lover of another science-fiction film, Contact, I also want humans to go to Mars to address the fundamental question, “Are we alone?” Mars is the most likely place where life evolved beyond Earth, with water stable on the surface for more than a billion years. Those first missions to Mars (from which we will bring everyone home!) will have astrobiologists and geologists from around the world searching for evidence that life evolved on Mars and what the implications of that life are for life here on Earth.
The Dish celebrates one of humankind’s greatest achievements. The Martian gets us ready for the next one that NASA is working hard to realize—the first humans to walk on the surface of another planet and help us to find life beyond Earth.
Study: 20 Percent of 911 Calls in San Francisco Are Butt Dials
From 2011 to 2014, San Francisco’s 911 dispatchers experienced a 28 percent surge in emergency calls. It wasn’t because of an increase in crime. Pranksters weren’t inundating the system. The real source? Butt dials.
According to a new report by Google, about 20 percent of all 911 calls made in San Francisco last year were pocket dials. As more people ditch landlines for smartphones—which are required to let users make emergency calls without having to unlock them—accidental emergency dials are on the rise. This is a big problem for 911 dispatchers who have to make sure the silence on the other end isn’t someone in danger. The extra investigation is straining an already overworked system.
Not only does a long, silent butt dial tie up a dispatcher on the line, but it also drags out the follow-up process. The report found that it took an average of one minute and 14 seconds to determine whether a call was accidental. Nearly 40 percent of the workers at a San Francisco call center said chasing down silent calls was the biggest “pain point” of their job.
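The scale of that overhead is easy to estimate. Here's a minimal back-of-envelope sketch: only the 20 percent share and the 74-second triage time come from the report, while the annual call volume is a made-up placeholder.

```python
# Rough estimate of dispatcher time lost to accidental 911 calls.
# From the report: ~20% of calls are pocket dials, each averaging
# 1 minute 14 seconds (74 seconds) to triage.
# ANNUAL_CALLS is a hypothetical volume, for illustration only.

ANNUAL_CALLS = 1_000_000      # assumed yearly 911 call volume
POCKET_DIAL_SHARE = 0.20      # ~20% of calls (per the report)
TRIAGE_SECONDS = 74           # average time to rule out an accidental call

pocket_dials = ANNUAL_CALLS * POCKET_DIAL_SHARE
wasted_hours = pocket_dials * TRIAGE_SECONDS / 3600

print(f"{pocket_dials:,.0f} accidental calls ≈ {wasted_hours:,.0f} dispatcher-hours")
```

Even under these toy numbers, a fifth of the call volume translates into thousands of dispatcher-hours a year spent listening to silence.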
Last year, FCC Commissioner Michael O’Rielly wrote a blog post suggesting that 50 percent of 911 calls were the result of butt dials. “Dedicated and hard-working public safety officials who answer and respond to Americans in times of need are being inundated by accidental wireless calls to 911," O'Rielly wrote. “This is a huge waste of resources, raises the cost of providing 911 services … and increases the risk that legitimate 911 calls—and first responders—will be delayed."
He suggested that wireless providers automatically send a text to 911 callers. “If consumers are alerted to the simple fact that they have dialed 911 accidentally, they may take precautions to prevent it from happening again,” he wrote. He also proposed a penalty fee for repeat butt-dial offenders.
In the United Kingdom, emergency call centers adopted a system to quickly identify butt dials by prompting the caller to press “55” if they were there, according to the BBC. The technology helped reduce the volume of calls.
Google recommends automating the callback process for dispatchers and improving the way call centers keep track of accidental dials. Until 911 handlers find a solution, consider locking your phone or setting a passcode before jamming it back into your pocket. It might make an overworked dispatcher’s day a little better.
California Governor Bravely Vetoes Bill to Ban Drones From Interfering With Firefighters
If you’re like me, you’ve been reading the news out of California this summer and fall, as the state has been ablaze with wildfires, and asking yourself: “When will someone stand up for obnoxious hobbyists who interfere with emergency rescue operations by flying their drones over active wildfires?” Well, friends, our hero has finally come along, and he’s California Gov. Jerry Brown. This weekend, Brown took me by surprise by vetoing a bill that would have made it unlawful to operate a drone “in a manner that prevents or delays the extinguishment of a fire, or in any way interferes with the efforts of firefighters to control, contain, or extinguish a fire.” Obviously someone has never heard the old political adage “Voters hate unextinguished fires.”
The vetoed measure, SB 168, came into being this summer, after several drone hobbyists made news by getting in the way of rescue operations. SB 168 seemed about as uncontroversial as a bill can get, catering as it did to the public’s reflexive dislike for both wildfires and drone hobbyists. But Brown, who has consistently opposed intrusive drone regulations, vetoed it anyway. Over the weekend, he also vetoed two other drone-control bills, which would have made it a misdemeanor to operate drones in or over state jails and prisons and to operate drones without permission at low altitudes on public-school grounds, respectively. In September, he vetoed a bill that would have required drone pilots to obtain permission before flying their devices over private property at low altitudes. It’s probably not coincidental that California is the capital of the drone industry in the United States and that drones stand to add a lot of money to the state economy if the sector is allowed to flourish.
But there’s more going on here than just good old laissez-faire capitalism. In his veto message to the California state Senate, Brown said that each of the three vetoed drone bills, along with six other bills that he declined to sign, “creates a new crime—usually by finding a novel way to characterize and criminalize conduct that is already proscribed. This multiplication and particularization of criminal behavior creates increasing complexity without commensurate benefit.” Though improper drone usage presents a public menace, a greater menace, Brown apparently believes, is a cumbersome code of laws that makes it far too easy for the state to turn citizens into prisoners. I’m sympathetic to Brown’s logic.
If I’m reading them correctly, each of the three vetoed drone-control bills sought to criminalize behavior that was already broadly prohibited. In the state of California, it is already a misdemeanor to “engage in disorderly conduct that delays or prevents a fire from being timely extinguished” or to prevent emergency responders from discharging their duties. The state already prohibits people from sneaking contraband into prisons or attempting unauthorized communication with prisoners; the state already prohibits uninvited visitors from disrupting school activities.
Legislators’ attempts to get specific are a function of frustration, both with drone operators whose actions too often defy common sense and with a federal government that is taking its sweet time to come up with comprehensive regulations for an industry that desperately needs them. The point of explicitly stating “No drones allowed” is to remove any doubt that drone-related misconduct is prohibited; to make things clearer for cops and prosecutors who might not immediately know what to charge when some jerk accidentally crashes his drone into a busy schoolyard or into a rescue helicopter. These won’t be the last drone-control bills that Jerry Brown will have to consider, and while I appreciate his big-picture approach to drone regulations, it’ll only take one big drone-related tragedy for the governor to get burned.
This article is part of a Future Tense series on the future of drones and is part of a larger project, supported by a grant from Omidyar Network and Humanity United, that includes a drone primer from New America.
Wily Attack on Microsoft Outlook Is Especially Worrying Because Everyone Uses Outlook
Microsoft's Outlook email service isn't exactly, how do I put this, a favorite. Most people end up using it for work email at some point, but no one seems to really like it. As Gizmodo editor-in-chief Annalee Newitz put it in May, "Microsoft Outlook has the distinction of being one of the world’s most widely-used email and calendaring systems—and the one that arouses the most profound indifference in its users." So when a security issue crops up in Outlook, you might be tempted to just ignore it. But the whole ubiquity thing makes that really hard to do.
Take, for example, a new attack on the Outlook Web Application (Outlook's browser access) spotted by Ars Technica. A report released Monday from security firm Cybereason outlines a malware attack that sits on the Web app server and collects login credentials from a particular company or organization. Cybereason discovered the exploit after one of its clients noticed unusual activity on its network and had Cybereason scan its 19,000 endpoints (devices like laptops, smartphones, or any Internet-connected equipment).
The firm concluded that malware affecting the client had been strategically placed on a particular component of Microsoft's Exchange Server, which deals with Outlook email and calendar data. The malware offered a backdoor to decrypted HTTPS requests, exposing passwords and other data. Cybereason notes that its client was using the Outlook Web Application to allow for remote access (a common capability that allows employees to keep up with work email).
Contrary to other web servers that typically have only a web interface, OWA is unique: it is a critical internal infrastructure that also faces the Internet. ... This configuration of OWA created an ideal attack platform because the server was exposed both internally and externally. Moreover, because OWA authentication is based on domain credentials, whoever gains access to the OWA server becomes the owner of the entire organization’s domain credentials. [Emphasis theirs.]
Outlook may be boring and corporate, but that's exactly what makes it a perfect target for a persistent attack over a long period of time: Tons of high-profile companies use it. Cybereason is just presenting one case study, but it's not unreasonable to think that such an effective attack is already in use against other organizations as well, or will be. Companies that use a third-party credential manager (for example, Slate uses Okta) are probably not vulnerable to this attack. I reached out to Microsoft for comment and will update with any response.
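One generic defense against this class of server implant is file-integrity monitoring: record cryptographic hashes of the web application's binaries when the server is known-good, then periodically re-check them. The sketch below is a hedged illustration, not Cybereason's tooling; the directory layout and baseline file name are hypothetical.

```python
# Sketch: flag new or modified DLLs in a web application directory by
# comparing SHA-256 hashes against a previously recorded baseline.
# The app directory and baseline-file layout are hypothetical examples.
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def audit(app_dir: str, baseline_file: str) -> list[str]:
    """Return findings for files that are new or differ from the baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    findings = []
    for dll in sorted(Path(app_dir).glob("*.dll")):
        expected = baseline.get(dll.name)
        if expected is None:
            findings.append(f"unexpected file: {dll.name}")
        elif expected != hash_file(dll):
            findings.append(f"modified file: {dll.name}")
    return findings
```

A scheduled job running a check like this against the server's authentication modules would flag an implanted component the moment it appears, rather than months later when someone notices odd network activity.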
Update, October 6, 2015, 11 a.m.: A Microsoft spokesperson says, “The report conveniently skips over the important details of how an attacker might 'gain a foothold into a highly strategic asset' if a system is properly managed, secured, and up to date. For all types of critical servers and applications, we recommend IT administrators use the latest products and services, in combination with industry best practices for IT management.” Of course it probably wouldn't be in Microsoft's interest for Cybereason to publicly disclose that, but the company seems to be hoping that the attack exploits a vulnerability that was previously patched.
A Drone, a Phone, an Attack Zone: Printer Hack
You might think that working on a secured floor in a 30-story office tower puts you out of reach of Wi-Fi hackers out to steal your confidential documents.
But researchers in Singapore have demonstrated how attackers using a drone plus a mobile phone could easily intercept documents sent to a seemingly inaccessible Wi-Fi printer. The method they devised is actually intended to help organizations determine cheaply and easily if they have vulnerable open Wi-Fi devices that can be accessed from the sky. But the same technique could also be used by corporate spies intent on economic espionage.
The drone is simply the transport used to ferry a mobile phone that contains two different apps the researchers designed. One, which they call Cybersecurity Patrol, detects open Wi-Fi printers and can be used for defensive purposes to uncover vulnerable devices and notify organizations that they’re open to attack. The second app performs the same detection activity, but for purposes of attack. Once it detects an open wireless printer, the app uses the phone to establish a fake access point that mimics the printer and intercept documents intended for the real device.
“In Singapore … there are many skyscrapers, and it would be very difficult to get to the 30th floor with your notebook [if there is no] physical access,” says Yuval Elovici, head of iTrust, a cybersecurity research center at the Singapore University of Technology and Design. “A drone can do it easily. This is the main point of the research, closing the physical gap with [a] drone in order to launch the attack or scan easily all the organization [for vulnerable devices].”
Student researchers Jinghui Toh and Hatib Muhammad developed the method under the guidance of Elovici as part of a government-sponsored cybersecurity defense project. They focused on wireless printers as their target because they say these are often an overlooked weak spot in offices. Many Wi-Fi printers come with the Wi-Fi connection open by default, and companies forget that this can be a method for outsiders to steal data.
For their demo they use a standard drone from the Chinese firm DJI and a Samsung phone. Their smartphone app searches for open printer SSIDs and company SSIDs. From the SSIDs, the app can identify the name of the company they’re scanning as well as the printer model. It then poses as the printer and forces any nearby computers to connect to it instead of the real printer. Once a document is intercepted, which takes just seconds, the app can send it to an attacker’s Dropbox account using the phone’s 3G or 4G connection and also send it on to the real printer so a victim wouldn’t know the document had been intercepted.
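The first step of that scan, identifying a printer and its model from nearby SSIDs, can be illustrated with a short sketch. The patterns below are hypothetical stand-ins (real printer SSIDs vary by vendor and model), not the researchers' actual code:

```python
import re

# Hypothetical SSID patterns; real-world printer SSIDs vary by vendor.
PRINTER_PATTERNS = [
    re.compile(r"^DIRECT-[0-9A-Fa-f]{2}-HP (?P<model>.+)$"),  # HP Wi-Fi Direct naming style
    re.compile(r"^EPSON-(?P<model>.+)$"),
]

def classify_ssid(ssid):
    """Return ('printer', model) if the SSID looks like an open Wi-Fi
    printer, otherwise ('other', None)."""
    for pattern in PRINTER_PATTERNS:
        match = pattern.match(ssid)
        if match:
            return ("printer", match.group("model"))
    return ("other", None)
```

In the actual attack, a hit from a scan like this would trigger the next stage: standing up a fake access point with the same SSID so nearby computers connect to the phone instead of the printer.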
The attack zone is limited to a radius of 26 meters. But with dedicated hardware, an attacker could generate a significantly stronger signal and extend that range, Elovici notes. Any computer inside the attack zone will opt to connect to the fake printer over the real one, even if the real printer is physically closer than the rogue one.
A drone hovering outside an office building is likely to be noticed, so using this method for an attack has obvious downsides. But the primary aim of the research was to show that adversaries don't need to be positioned close to a Wi-Fi device to steal data from it. A hacker could be controlling a drone from half a mile away or, in the case of autonomous drones, be nowhere near the building at all.
As for how close the drone would need to be to do the initial scan for vulnerable devices in a building, that depends on the Wi-Fi signal of the specific printer or other device. Typically the range of a printer is about 30 meters, Elovici notes.
Turning their mobile phone into a fake printer was not trivial, however.
After purchasing an HP6830 printer, they reverse engineered the protocol the printer used to communicate with computers sending it documents. Then they rooted a Samsung phone to install the Debian operating system on it. For the app, they wrote some Python code that simulates the HP printer.
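The core of impersonating a network printer is accepting raw print jobs the way the real device would. As a rough sketch of that idea (not the researchers' code), the snippet below listens once on TCP port 9100, the conventional raw/JetDirect printing port, and captures whatever bytes a client sends; a real interceptor would also forward the job to the genuine printer so the victim notices nothing:

```python
import socket

def capture_print_job(host="127.0.0.1", port=9100):
    """Accept a single raw print job on the JetDirect-style port and
    return the captured bytes. Illustrative sketch only: it handles one
    connection and does not forward the job to a real printer."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:  # client closed the connection; job is complete
            break
        chunks.append(data)
    conn.close()
    server.close()
    return b"".join(chunks)
```

The hard part the researchers actually solved was not this listener but reverse engineering the HP printer's specific protocol so the phone's responses were indistinguishable from the real device's.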
Any organizations that are more interested in uncovering vulnerable devices than attacking them can simply install the Cybersecurity Patrol app on a phone and attach it to a drone to scan their buildings for unsecured printers and other wireless devices. A drone isn’t essential for this, however. As the researchers show in their demo video, a phone containing their app can also be attached to a robot vacuum cleaner and set loose inside an office to scan for vulnerable devices as it cleans a company’s floors.
“The main point [of the research] was to develop a mechanism to try to patrol the perimeter of the organization and find open printers from outside the organization,” Elovici says. “It’s dramatically cheaper than a conventional pen test.”
Also in Wired: