The Next iOS Update Has a Feature to Prevent Cops From Searching Your iPhone
The iPhone’s Touch ID fingerprint unlocking is one of the most intuitive security features in consumer technology. Strong passwords are hard to remember and annoying to type. And biometric security, like fingerprint IDs, is great for keeping things locked down. But like anything, Touch ID is really great until it isn’t. Your fingerprints are, after all, readily available, and it’s not that hard for someone to force you to press a button to unlock your phone, which, let’s face it, is probably packed with all kinds of private information, like credit card numbers, search histories, or clandestine texts.
The good news is that Apple’s next iPhone update, slated to be released this fall, will come with a new feature that lets users quickly disable Touch ID as a way to unlock the phone. In the new iOS 11, if you quickly tap your home button five times, the phone reverts to a password-only screen lock (with an option to dial 911 if needed). Reverting to a password-only mode offers an extra layer of privacy protection from a police officer, or an abusive partner, or anyone else who may force or coerce a person into touching their iPhone to unlock it. The new feature was discovered by someone on Twitter who installed the beta version of the soon-to-be-released iPhone update.
While police are technically required by the Supreme Court to get a warrant to search your phone if it’s locked—in the same way they need to get a warrant to search your house—some courts have ruled that law enforcement can force you to use your fingerprint to unlock a phone. But cops can’t force you to reveal your password. The idea is that police can make you turn over something you have (a fingerprint, a driver’s license) but not something you know (like a passcode).
iOS 11 is a game-changer for Touch ID. Press power button rapidly 5 times and it opens the 2nd screen, but it also forces passphrase entry! pic.twitter.com/uvWbM04lyk — Kia☆ (@alt_kia) August 17, 2017
The new feature should be of particular interest to protesters or anyone else who would rather the police not read all their contacts and text messages. Currently, you have to navigate through settings and multiple screens to change how your phone is secured—which is difficult to do rapidly if you sense an impending arrest. With the update, if you think you might get approached by law enforcement, you can just reach into your pocket and tap the button five times.
The new iPhone update is also expected to include a new face-recognition unlocking feature. That makes this screen-lock shortcut even more important, considering how easy it would be for someone to confiscate your phone and hold it up to your face.
There are a lot of reasons to protest these days, but this is also useful in other circumstances. For instance, searches of mobile phones by border agents have skyrocketed in recent years. According to data from the Department of Homeland Security reported by NBC, in 2015 there were fewer than 5,000 cases of cellphone searches by border agents. But in 2016, that number grew to nearly 25,000. DHS reportedly searched 5,000 phones in February of this year alone.
To be sure, in some cases, federal law enforcement may demand your password anyway, especially at the border. But having that extra layer of protection is a welcome move.
It’s also in keeping with Apple’s philosophy. In 2016, the company fought a court order compelling it to break the encryption on an iPhone in the course of the San Bernardino terrorist investigation. At that time, the company was lauded as a champion of civil liberties to the deep annoyance of the FBI, and it’s hard to imagine law enforcement will be too thrilled about this new feature, either. (The FBI eventually found a third-party company to break into the iPhone in question.)
So don’t sleep on the new iPhone update. Whether it’s protecting your data from warrantless searches or your private information from nefarious hackers, it’s super important to have all the latest security features on deck. Plus the new face-unlocking feature looks pretty cool, too.
Facebook's Bar for Banning Speech Seems to Get a Lot Lower When Its Users Insult Mark Zuckerberg
Like a handful of major tech companies, Facebook has spent much of the past week removing content from white supremacists and neo-Nazis following the Unite the Right rally in Charlottesville, Virginia. Its targets included the rally’s main event page, as well as other related hate-group pages, including White Nationalists United, Right Wing Death Squad, Genuine Donald Trump, and others. Removing those pages was the right move, especially if the groups were using Facebook to promote violence.
But neo-Nazi pages weren’t the only thing Facebook banned last weekend. On Sunday, the Facebook page of a conservative, Los Angeles-based street artist named Sabo was taken down for using hate speech, too, according to a tweet from the artist following the removal of his page. But in this case, the timing of his Facebook suspension is curious.
The week before Sabo lost his Facebook privileges, the artist hung posters in a number of California cities that read “Fuck Zuck 2020,” pictures of which he posted on his Facebook page. The posters were an obvious play on persistent speculation that Facebook CEO Mark Zuckerberg has political ambitions following his recent Harvard commencement speech and 2017 tour across the U.S., during which the young executive has been trying to understand people who live in parts of the country he’s less familiar with. Zuckerberg even hired a former Clinton pollster earlier this month to advise his philanthropic work.
Sabo, to be clear, is known for making incredibly offensive art, often intentionally using racist and sexist imagery that is hateful, hurtful, and at times in clear violation of Facebook’s community policies. But the “Fuck Zuck” posters don’t necessarily fit that bill, even if they were likely offensive to the company’s CEO.
Sabo says that the note he received from Facebook said, “While we allow individuals to speak freely on Facebook, we take action on verbal abuse directed at individuals,” but the company didn’t include a direct reference to the offending post or posts in question.
Though Sabo has been booted from Facebook in the past for other reasons, if this time the censorship was sparked by his most recent anti-Zuckerberg posters, then Facebook has some explaining to do. After all, the website has let blatantly anti-Semitic websites be classified as news, like the Daily Stormer, which was the platform used by neo-Nazis to organize the rally that ended in violence this weekend in Charlottesville. Facebook’s algorithm may even have given a massive boost to a Daily Stormer article mocking Heather Heyer, the counterprotester who was killed by a rally attendee who drove his car into a crowd Saturday afternoon. Now Facebook is deleting links to the Daily Stormer’s article, but only after the post was already shared at least 65,000 times.
Last Friday, Sabo also hung posters around Google and YouTube offices in Los Angeles in protest of Google’s firing of the software engineer James Damore, who wrote the viral memo that claimed gender disparities in technical and leadership roles at the company are due to biological differences between men and women. The Google-specific posters featured a photo of Apple CEO Tim Cook with the caption “Think Different” next to a photo of Google CEO Sundar Pichai that read, “Not So Much.”
While both the Zuck 2020 posters and the anti-Google posters are without a doubt offensive to some, neither really constitutes hate speech. The posters don’t appear to make anyone unsafe, nor do they denounce or attack a group based on their ethnicity, race, religious affiliation, gender, or any of the other reasons Facebook lists in its community standards page on hate speech.
Again, Sabo has certainly made racist and horribly offensive art before, like posters mocking the Black Lives Matter movement that looked like movie ads for Planet of the Apes. Yet when those posters went up around the end of July and photos were posted on Facebook, Sabo says his page was not removed at that time.
One main problem here is that Facebook isn’t clarifying what the offensive content in question was. (The company also didn’t respond to a request for comment.) Rather, Facebook appears to be arguing that it acts on “verbal abuse directed at individuals,” which sure feels like a reaction to the “Fuck Zuck” posters.
Facebook hasn’t always been the most consistent when policing hate speech on its platform, nor is it clear what exactly does and does not count as speech that might get a user banned, which has led to uneven censorship. In June, ProPublica released a report on Facebook’s content moderation strategies that detailed internal documents describing how white men are a category of users who get special protection from hate speech, but black children are not. These distorted policies were never made public by Facebook, and users often don’t know when they’re saying something that could get them suspended. If the social media company were more transparent about how it classifies hate speech, not only would users be more aware of the rules of the road, but public scrutiny would likely prevent it from holding such lopsided policies in the first place. All of which is illustrative of why being crystal-clear about how the social media giant polices content is just as important as being proactive about banning hate speech and protecting user safety in the first place.
Cloudflare's CEO Is Right: We Can't Count on Him to Police the Internet
Earlier this week, I wrote that Charlottesville could mark an inflection point in the battle over online speech. Not only were social media platforms suddenly getting serious about cracking down on the racist “alt-right,” but back-end web infrastructure companies—which have typically pled neutrality with regard to the content of the sites they serve—suddenly found themselves under intense pressure to do the same. First, the domain registrar GoDaddy dropped the neo-Nazi Daily Stormer; Google Domains and others quickly followed suit.
But there remained one notable holdout: Cloudflare, a server company that specializes in protecting sites against DDoS attacks, was still serving the Daily Stormer—insisting, as it has in the past when challenged to defend controversial clients, that policing online speech is not and should not be its job.
That changed on Wednesday, when Cloudflare CEO Matthew Prince woke up “in a bad mood” and decided to pull the plug on the Daily Stormer. His memo to employees, published in full by Gizmodo, dripped with bitter ambivalence. Here’s an excerpt (italics mine):
Let me be clear: this was an arbitrary decision. It was different than what I’d talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. … It was a decision I could make because I’m the CEO of a major Internet infrastructure company.
Having made that decision, we now need to talk about why it is so dangerous. I’ll be posting something on our blog later today. Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.
Prince followed up with an official Cloudflare blog post further explaining his sudden change of heart. His “bad mood,” it seems, had been stoked by the Daily Stormer’s boasting that Cloudflare secretly supported its racist ideology. The post went on to argue, forcefully and in detail, that a system in which a company such as Cloudflare can make such a decision on a whim is a flawed one. And while Prince expressed no regret about pulling the plug on the Daily Stormer specifically, he worried that in doing so, he had opened a door that would have been better left shut. He wrote: “After today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don't like.”
That’s probably true, and it echoes the concerns raised by high-tech law expert Eric Goldman in my Slate story and by the Electronic Frontier Foundation’s Nate Cardozo in a story by The Verge’s Russell Brandom.
That said, anyone who fears a slippery slope toward corporate censorship of the web can take at least some comfort in the way Cloudflare communicated its decision. While there’s no guarantee that the CEO of such a company will regard its own huge power over web companies with due awe, suspicion, and fear, it’s reassuring to know that Prince, for one, does. Perhaps the operative metaphor here is not a slippery slope, but a high bar: one that only a group as unambiguously disgusting and evil as neo-Nazis could clear.
But that seems a little naïve. Already, calls are growing for Cloudflare, GoDaddy, and other web-infrastructure firms to ban a slew of other groups affiliated with the white supremacist movement. Does anyone doubt that conservative pressure groups will gleefully adopt the same tactic against left-wing targets?
Cloudflare may seem like a small part of what makes the web run, but it isn’t: The company says it handles something on the order of 10 percent of all internet requests. That actually understates its influence, because Cloudflare is by far the market leader in DDoS protection, and its clients tend to be those most vulnerable to such attacks. Posting controversial content on the web without Cloudflare’s protection is like strutting out onto a battlefield naked with a target painted on your back. There are a handful of other, mostly very large, companies that play similarly critical roles in maintaining the modern internet. As Prince put it: “Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online.”
Prince’s call for such a framework is probably the most important part of his memo. As Goldman pointed out to me, the problem with companies such as GoDaddy, Google, and Cloudflare dropping the Daily Stormer was not that the Daily Stormer deserves to have its vile viewpoints heard. Rather, the problem is that the decision was made on an ad hoc basis, with GoDaddy and Google disingenuously holding up their terms of service as a fig leaf.
The reality is that few, if any, of these companies have ever thought seriously about those terms of service or enforced them consistently. Companies such as Google, Facebook, and Twitter have been thinking through these issues and refining their policies for over a decade, and they still get big decisions appallingly wrong on a frequent basis. If you think they’re bad at distinguishing between legitimate and illegitimate content, imagine how ham-fisted a company like GoDaddy is likely to be—especially given that the only punishment at its disposal has been compared to the internet’s version of the death penalty. And the notion that we can count on the free market to supply alternatives to overzealous service providers is undermined by the industry’s huge barriers to entry. You can’t just go out and start an “indie” Cloudflare, because only a sprawling global network of servers could do what it does.
That the internet has made it this far depending on infrastructure built and maintained by unaccountable, largely unregulated, private corporations is something of a miracle. But the alarming recent rise in explicit online hate, intimidation, and organized racism and violence in the United States—and the corresponding rise in public awareness of it—has brought the system’s underlying flaws into sharp relief. Perhaps we do need companies like GoDaddy and Cloudflare to take a more active role in deciding what should be allowed on the Internet. But if so, we also urgently need them to develop some ground rules for those decisions that go beyond “we’ll enforce our terms of service when we feel like it.”
The Damage Caused by the 93-Day Internet Blackout in Cameroon
On Jan. 17, the internet went out in Bamenda, the English-speaking city where I live in the Northwest region of Cameroon.
The internet shutdown affected the Anglophone regions of Cameroon—approximately one-third of the population of the country. Meanwhile, Francophone Cameroonians continued to enjoy internet access. Why? The government claimed that Anglophone Cameroonians were using social media to spread rumors, fuel anti-government protests, and threaten national unity.
Just a day before services disappeared, the Ministry of Posts and Telecommunications issued a statement that warned social media users of criminal penalties if they were to "issue or spread information, including by way of electronic communications or information technology systems, without any evidence." The statement also confirmed that the authorities had sent text messages direct to mobile phone subscribers, notifying them of penalties, including long jail terms, for "spreading false news" via social media.
For me, this shutdown was devastating. I’m a STEM advocate and a tech instructor for girls ages 10 to 18. I run after-school and holiday programs, where I teach girls hands-on digital skills at the Center for Youth Education and Economic Development (CYEED). My goal is to share my passion for STEM and to inspire them early on with the help of mentors, so they will stay engaged throughout their school year and their lives. In Cameroon, less than 30 percent of students in STEM programs are girls. My dream is to narrow the gender tech divide, so our women can have equal access to the economic opportunities STEM careers can provide. I am on a mission to eradicate the stereotypes about STEM education being for boys.
Naturally, the internet is one of the tools I use on a daily basis to access free online resources to train girls. I also belong to social online platforms where I gain skills and training. It is not an exaggeration to say that when I discovered that I could not connect to the internet, it was one of the most horrifying issues I have faced in life. I could not believe that it would now be impossible to express myself to the world, reach emergency services, or communicate with my family.
The nearest strong and reliable internet connection was three hours away on a very bad road, in a nearby French-speaking region. Every weekend, I packed my little backpack, bought a bus ticket, and traveled to the town of Baffoussam. It was the only way I could keep my work alive.
During the blackout, I was working on two main projects that required internet access. One was the Technovation Challenge, an international project to bring girls into STEM. As a regional ambassador for this program, I recruit and train girls to build mobile Android apps that will address problems in their community, using MIT App Inventor software. I estimate that because of the internet blackout, about 200 girls in Bamenda lost the ability to take part in the Technovation Challenge. That privilege was now reserved only for the girls from French-speaking regions of Cameroon.
The girls who could not take part have kept on contacting me to seek solutions. Many of them had taken part the year before and were eager to do it again. But it was too expensive to move them to Baffoussam for training. It was a lost opportunity.
My second project at the time was the World Pulse Advanced Digital Changemaking program. In this project, I was not a leader, but a participant. World Pulse is a powerful digital platform uniting people from around the world who strive to speak out and build solutions to today’s biggest challenges. It supports these leaders by advancing their digital skills and leadership, empowering them to mobilize others and create real social transformation.
I had only been in the program training for three weeks when the internet was shut down. I needed internet to access training material, do my assignments, run live calls in virtual classrooms with classmates, and have Skype calls with my assigned mentor, who helped me craft my vision and transform my project into a reality.
I made up my mind to complete this training despite the internet blackout. It was a nightmare, very frustrating and painful, but I kept forging ahead while keeping a positive attitude. Thanks to my trips to Baffoussam, I was able to complete my online training after three months. In a competition between 30 women around the globe, my project titled “Bring a Girl to STEM” came first. Recently I was awarded as Featured Impact Leader for World Pulse. I will receive mentoring and financial assistance for my project that aims to attract and retain girls in the STEM fields. Had I not been able to pay for the bus trips to Baffoussam, I would have lost out on this wonderful opportunity.
The situation was even more serious for businesspeople, such as my co-worker who runs a big cyber cafe. Workers lost their jobs, and many businesses that rely on the internet were also shut down. People were left unable to support their families.
Finally, after 93 days, the internet was restored on April 20. All of us were happy to reconnect, but we will never forget the bad experience and all the opportunities lost. And it could happen again: The government statement announcing the restoration of the internet included a warning that the government reserved the right to kick people offline again if they “misused” it.
This threat is frightening to me. I call on the government of Cameroon to keep the internet on forever. The internet is a great tool for the development of a country. Furthermore, access to the internet is a human right. It should be made accessible and affordable for all. I envision a country where the internet is free for everyone to use. I envision a society where everyone is trained on internet best practices and productive usage—especially women and girls.
Meet Mappy, a Software System That Automatically Maps Old-School Nintendo Games
Many of the classic Nintendo Entertainment System games are marvels of level design. The introductory moments of Super Mario Bros., for example, famously teach players to search for magic mushrooms by making it difficult to avoid the first one they encounter. In the decades since their initial release, those titles have been extensively explored, their every secret unveiled by avid enthusiasts.
Nevertheless, it has remained difficult to make clear, simple maps of the games and their worlds. As Joseph Osborn, a Ph.D. student in computer science at the University of California–Santa Cruz, told me, the conventional method has remained largely unchanged for years.
“Traditionally, game maps are made by taking a lot of screenshots and pasting them together,” Osborn said. “Even in early video game strategy guides or things like that, you’d have somebody with a capture card or a camera set up in front of a TV, and they would have to take pictures multiple times in the level and stitch it together by hand.”
In collaboration with his colleagues Adam Summerville and Michael Mateas, Osborn set out to find an alternative to that labor-intensive and error-prone process. Their answer comes in the form of a software system called Mappy that they describe in a paper available now on the scholarly preprint service arXiv.
In essence, Mappy (not to be confused with the game of the same name) autonomously generates maps of a game’s levels (or, in some cases, its whole world), suturing together long scrolling screens and figuring out how distinct rooms connect to one another. “At a high level, what we do is look at what’s visible on screen at any given time. We record what the player could be seeing, and automatically stitch it together into a large map,” Osborn says. The process allows them to generate images that display the total makeup of a room, even as it changes in response to the player’s actions or other on-screen events.
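The paper's actual pipeline involves expensive scrolling detection and object tracking, but the core stitching idea Osborn describes can be sketched in a few lines. Everything below (the frame format, the `stitch` function, the toy tile IDs) is invented for illustration, not Mappy's real code: each frame contributes a small grid of tiles plus a scroll offset relative to the previous frame, and accumulating those offsets places every frame's tiles into one world map.

```python
def stitch(frames):
    """Paste per-frame tile grids into one world map.

    frames: list of (tile_grid, (dx, dy)) pairs, where tile_grid maps
    (x, y) screen coordinates to tile IDs and (dx, dy) is the scroll
    since the previous frame.
    """
    world = {}
    ox = oy = 0  # accumulated scroll offset
    for tiles, (dx, dy) in frames:
        ox += dx
        oy += dy
        for (x, y), tile in tiles.items():
            # Later frames overwrite earlier ones, so the map reflects
            # the most recent state of each location the player saw.
            world[(x + ox, y + oy)] = tile
    return world

# Two overlapping 2x2 "screens," the second scrolled one tile to the right:
f1 = ({(0, 0): "A", (1, 0): "B", (0, 1): "C", (1, 1): "D"}, (0, 0))
f2 = ({(0, 0): "B", (1, 0): "E", (0, 1): "D", (1, 1): "F"}, (1, 0))
print(stitch([f1, f2]))  # a 3x2 world: A B E / C D F
```

The overwrite-on-revisit behavior is also why the resulting map can show a room's state changing in response to player actions, as Osborn notes.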
Mappy is not a fully autonomous system, in that it doesn’t figure out how to actually navigate the game on its own. Osborn and his collaborators first have to feed it information from a playthrough, including data about which buttons were being pushed at any given time. But, as Osborn explains, “During Mappy’s execution, it’s automatically pressing the buttons” based on that information. It even has rules in place to make sure that what it’s seeing is actually part of the playable world and not, for example, a separate cut scene.
Though the process is made easier by the relative simplicity of the NES’s graphics hardware, Mappy’s work is still difficult. Osborn and his co-authors write, “Mapping out a minute of play (3600 frames) takes between five and six minutes, mainly due to the expensive scrolling detection and game object tracking.” Some of those challenges would likely be amplified if they attempted to apply their method to a more complex console such as the Super Nintendo Entertainment System, which employs graphical layering to generate effects such as fog drifting by in the foreground. Nevertheless, they argue that similar techniques should still be feasible on other game systems.
Osborn imagines an array of possible applications for Mappy. For one, he suggests, it might help those who are experimenting with procedurally generated game levels, giving them a consistent dataset from which to train machine learning algorithms that could produce playable worlds of their own. He also proposes that it might empower a kind of search engine that lets you look up a specific section of a game and then leap directly to that moment, letting you play through your memories.
Whether or not Mappy gets us there, it already stands as a charming and impressive accomplishment. As it demonstrates, modern computer science still has a great deal to teach us about the cartography of our digital past.
Future Tense Newsletter: How Doxxing, Data, and DNA Are Disrupting Our Future
Greetings, Future Tensers,
Technology is changing the way we protest, and that includes how the government reacts to it. On Saturday, the Department of Justice requested a warrant that would force the host of protest group #DisruptJ20’s website to turn over data revealing essentially anyone who has visited the site of the group responsible for many protests against Donald Trump’s inauguration. Jacob Brogan writes about how DreamHost, the company that hosts DisruptJ20’s site, is resisting the effort and what this says about Trump’s scary record of collecting data on those in opposition to him.
In the U.K., lawmakers are considering a law that would ban the identification of individuals from anonymized data. That may sound great, but cracking anonymous data can be essential to important security research, explains Nick Thieme. And data security isn’t the only threat to our technological well-being. April Glaser spoke with researchers at the University of Washington about how they were able to encode malware into DNA and what potential dangers it could lead to for hospitals and research centers.
- Still a bad idea: If everyone knows blackface is a bad idea, why are companies still creating apps to allow people to change their skin color? April Glaser begs developers to “knock it off.”
- Book smart: Hardcover textbooks are costing underfunded school districts millions of dollars. E-books could provide a cost-saving answer for both students and teachers, writes Lindsey Tepe.
- Ready, set, sew: If you think video games and crafts are on opposite sides of the recreational spectrum, this new game is ready to prove you wrong. Grace Ballenger reports on how new computer interfaces, such as looms, can make gaming more widely accessible.
- Generation ¯\_(ツ)_/¯: Lisa Guernsey explains why parents shouldn’t be panicked by a recent viral story about how smartphones are changing the lives of teens.
- E-market eclipse: Excited for the eclipse? Buy your glasses online? Amazon is offering refunds for some eclipse-viewing products—though it isn’t clear whether they’re all faulty.
For Future Tense
The Department of Justice Demands Records on Every Visit to Anti-Trump Protest Site DisruptJ20
If you’ve visited the website DisruptJ20, which helped organize protests during the inauguration of Donald Trump, the Department of Justice is interested in learning more about you.
On Saturday, a judge in the Superior Court of the District of Columbia approved a search warrant that would require DreamHost, DisruptJ20’s provider, to turn over a wide range of information about the site and its visitors. In addition to information about the site’s creators, the DOJ demands “logs showing connections related to the website, and any other transactional information, including records of session times and duration.” In short, the government is looking for records of everyone who even visited the site, which is to say it's effectively compiling info on those who showed even a modicum of interest in protesting the administration.
DreamHost is resisting the effort. In a blog post, the company acknowledges that it has “no insight into the affidavit for the search warrant (those records are sealed).” Nevertheless, DreamHost also writes that its general counsel “has taken issue with this particular search warrant for being a highly untargeted demand that chills free association and the right of free speech afforded by the Constitution.” As it goes on to explain, turning over the requested records would mean providing 1.3 million visitor IP addresses along with “contact information, email content, and photos of thousands of people.”
As ZDNet notes, “Several purported members of [DisruptJ20] were arrested for alleged violent conduct during the protests.” It links to a Washington Post article from January that claims, “Police said in court filings that the damage caused by the group was in excess of $100,000.”
In advance of the inauguration itself, however, the organizers laid out sweeping, but still legal, goals on their site. “We’re planning a series of massive direct actions that will shut down the Inauguration ceremonies and any related celebrations–the Inaugural parade, the Inaugural balls, you name it,” they wrote.
DreamHost’s blog post stresses that those who came to the site in search of information about such activities had every right to do so, just as they had every right to protest the inauguration. As such, it’s not clear why the DOJ would need their IP addresses and other related data. “That information could be used to identify any individuals who used this site to exercise and express political speech protected under the Constitution’s First Amendment,” the post reads. “This is, in our opinion, a strong example of investigatory overreach and a clear abuse of government authority.”
This is not, of course, the first time that the Trump administration has sought sweeping information about citizens. In late June, the DOJ demanded massive amounts of voter registration data, information that many states refused to provide. While that may have been part of an ongoing effort to purge voter rolls, this new warrant is troubling in part because it suggests the Trump administration is also actively gathering records about its opponents.
We’ll likely know more after a hearing about the request, currently scheduled for Friday, Aug. 18 in Washington.
A Proposed Anti-Doxxing Law in the U.K. Could Make Personal Data Less Secure
Personal medical information from 1 in every 4 Americans has been stolen. On average, there were three data breaches in the U.S. every day during 2016. People outed by trails of left-behind data have taken their own lives. “Better” outcomes of doxxing include relentless abuse and death threats.
Against this backdrop comes the United Kingdom’s wise move toward new data protection laws. As part of this process, a proposed law would ban “intentionally or recklessly re-identifying individuals from anonymised or pseudonymised data.” Digital Minister Matt Hancock told the Guardian the law, if implemented, will “give people more control over their data, [and] require more consent for its use.” This shift recognizes that the threat of doxxing can chill your comfortable internet browsing. But, counterintuitively, it also makes your data less secure.
Whether your data can be truly anonymous is up for debate. But “anonymous data” usually refers to information that cannot be associated with the person who generated it. This kind of data is vital to scientific research. When I conducted cancer research, I used free, open-access, genetic data from real people with real diseases. Having the same, constantly updated, genetic data freely available to all scientists creates a baseline, which can help verify results, allow researchers to avoid echo chambers, and aid reproducibility.
The issue is that “anonymous” data can often be de-anonymized. In 2006, Netflix released the data of 500,000 customers in the interest of crowdsourcing improvements to its prediction algorithm. As was standard practice at the time, the company removed all personally identifiable information from the data, releasing only customers’ movie ratings and thinking this would keep their identities hidden. It was wrong. Arvind Narayanan and Vitali Shmatikov, researchers from the University of Texas at Austin, compared movie ratings from the Netflix dataset with publicly available IMDB data, and because movie ratings are very personal—I can’t imagine anyone other than me 5-starring The Man With the Iron Fists, Hunter x Hunter, and My Cousin Vinny—they were able to match names of IMDB users with Netflix accounts.
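The mechanics of such a linkage attack are simple enough to sketch in a few lines. The snippet below is a toy illustration only—the names, titles, and ratings are invented, and the real Narayanan–Shmatikov attack used a far more sophisticated statistical similarity measure—but it shows the core idea: an “anonymized” ratings table can be matched against a public profile by counting overlapping ratings.

```python
# Toy linkage attack: match a public, named profile against an
# "anonymized" ratings table by counting exact rating overlaps.
# All data here is fabricated for illustration.

anonymized_ratings = {
    "user_001": {"The Man With the Iron Fists": 5, "Hunter x Hunter": 5, "My Cousin Vinny": 5},
    "user_002": {"Casablanca": 4, "My Cousin Vinny": 2},
}

# A public profile (e.g., scraped from a review site) with a real name attached.
public_name = "Jane Doe"
public_ratings = {"The Man With the Iron Fists": 5, "Hunter x Hunter": 5}

def best_match(profile_ratings, table):
    """Return the anonymized ID whose ratings overlap most with the profile."""
    def overlap(candidate):
        return sum(
            1
            for title, stars in profile_ratings.items()
            if table[candidate].get(title) == stars
        )
    return max(table, key=overlap)

print(public_name, "->", best_match(public_ratings, anonymized_ratings))
```

Because distinctive taste makes rating vectors nearly unique, even a crude overlap count like this one can single out an individual once the pool of candidates is small enough.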
This research demonstrates two things: 1) Anonymous data often isn’t, and 2) it can be critically important for researchers to—as the U.K. might put it—“intentionally … re-identify individuals from anonymised or pseudonymised data.” Researchers need the ability to break privacy systems. When the options are a good guy picking your lock to convince you it’s broken, or a bad guy picking your lock to steal your passport, the choice is clear. The analysis from the University of Texas at Austin led to lawsuits against Netflix. More importantly, it warned the entire data industry that the privacy methods Netflix was using at the time were insufficient and should no longer be used.
The U.K.’s Information Commissioner’s Office anonymization code of practice considers data “anonymized” if it is not “reasonably likely” to be traced back to an individual. “Reasonably likely” isn’t well defined, but if only one research team in the world can de-anonymize the data, the data probably falls under the ICO’s definition of anonymous. Under this definition, and the newly proposed laws, the Texas researchers would have committed a crime punishable by an apparently unlimited fine. Shmatikov, who is now a professor at Cornell University, views the U.K.’s proposed law as perfectly wrong. He told me that the kind of research that will keep people safe is “exactly the kind of activity that [the U.K.] is trying to penalize.” He later said he would not have conducted his research if it were penalized by this sort of law.
To be clear, I do not unequivocally support all research that seeks to break security systems. The leadership of the unnamed company that sold the FBI the software used to break into the San Bernardino shooter’s iPhone should be tarred, feathered, and marched down a busy street by a matronly nun ringing a bell. The company didn’t require the FBI to release the details of that security flaw, so the exploit might be sitting in your pocket right now, waiting to be abused. A white-hat hacker always releases his or her secrets.
But laws are not the cure for doxxing. The ongoing research of white-hat researchers like Shmatikov will not stop people from invading each other’s privacy, but it will keep all of our data safer.
And the new laws do offer protection for research in other related areas. For instance, citizens may be barred from having their data removed from university datasets if the institution can argue the data is “essential to the research.” It would be extremely easy to create a similar research exception in the de-anonymization statutes. Shmatikov, however, doesn’t see this as a perfect solution. In his eyes, good security research can be done by people outside academia. He also believes the harm from de-anonymization occurs when personal information is shared across the internet, not when someone is de-anonymized.
Still, the ICO has two options. It could add an exception to the de-anonymization laws for researchers who sequester the data they de-anonymize, whether the researchers are academic or not. Or, it could penalize the sharing of de-anonymized data, rather than its creation. (Remember, the U.K. doesn’t have the First Amendment.) Both paths would disincentivize internet hordes from making the private public, without handcuffing the researchers who are improving the technology that will actually help.
New Fabric Interfaces Weave Together Textiles and Computers in Unexpected Ways
Imagine sitting in front of a “Choose Your Own Adventure” tale. It’s just like the book format that you might remember from childhood, but instead of being on a page, it unfolds on the screen. You read through several paragraphs of text, and are presented with four colorful options to shift the direction of the story. Here’s the magical part: You select your preferred option by passing a certain color thread through a loom. That’s right—this game is loom-controlled.
When most people think of interacting with computers, they think of traditional interfaces, like a computer mouse, keyboard, or cellphone. However, as computer capabilities evolve, so do the variety of interfaces. One intriguing and nontraditional idea is to use fabric-based interfaces for fun or to achieve a goal. Because all of us are familiar with fabric as a material, the systems also aim to be more inclusive.
One version of a cloth-based computer interface is a video game system called Loominary, which uses a tabletop loom as an interface to weave a scarf.
WannaCry Hero Pleads Not Guilty to Malware Charges
On Monday, cybersecurity researcher Marcus Hutchins, better known by the nom-de-keyboard MalwareTech, pleaded not guilty to creating and distributing malware, Motherboard reports.
As April Glaser has previously explained in Slate, Hutchins rose to international prominence after he helped stop the WannaCry ransomware attack earlier this year. Accordingly, it came as a shock to many when he was arrested in August for his alleged contributions to Kronos, a piece of banking malware seemingly unrelated to WannaCry.
According to Motherboard, “[T]he prosecution said that Hutchins had admitted ‘that he was the author of the code that became the Kronos malware’ when he spoke to FBI agents” in an earlier hearing. Kronos, which first appeared in mid-2014 and reportedly sold for $7,000, primarily targeted banks in the United Kingdom and other countries, leading some, Motherboard writes, to ask why “a British researcher [is] being indicted in the United States for a malware that apparently had no American victims.”
Even if Hutchins did contribute to the Kronos code, as prosecutors allege, it’s still not clear what, if any, evidence they have that he helped market it. The Guardian cites Jake Williams, a cybersecurity researcher who suggests it’s unlikely that Hutchins would have done so, since he refused payment for a legitimate project they worked on together around the time Kronos was active. “I have a hard time picturing him refusing money for work from me but at the same time taking money for illegal activities,” Williams tells the Guardian. More recently, as the paper also notes, Hutchins donated reward money that he received for helping shut down WannaCry.
For now, at least, Hutchins is out on bail and will, Motherboard reports, “be allowed full internet access so he can continue to work as a security researcher.” His trial is scheduled for October.