Future Tense
The Citizen's Guide to the Future

Sept. 22 2017 2:15 PM

Prosecutors Lied About a Chemist Who Tainted 18,000 Convictions. Time to Overturn Them All.  

On Jan. 9, 2012, Sonja Farak—a chemist at a crime lab in Amherst, Massachusetts—pilfered a sample of crack cocaine and smoked it in the bathroom throughout the morning. Then, before lunch, she stole and ingested some LSD. Farak later admitted that she was “too impaired to drive home” or use the laboratory equipment.

That same day, Farak certified that a substance delivered to the lab was heroin. The substance had been found on Rolando Penate, whom prosecutors charged with selling heroin, citing Farak’s certification. Penate served five years, seven months, and 12 days in prison.

It wasn’t an isolated case. Farak began using drugs from the lab in 2005 and continued to do so until she was arrested in 2013. All told, roughly 18,000 convictions were tainted by certifications that Farak issued while under the influence. Now the ACLU is asking Massachusetts’ Supreme Judicial Court to vacate these convictions. All of them. With prejudice. It’s an extraordinary, sweeping remedy. The court should grant it.

Farak isn’t the only crime lab chemist in Massachusetts who engaged in criminal behavior at work: Over her nine-year career, Annie Dookhan falsified tens of thousands of reports, frequently certifying results without testing the substance. Dookhan’s misconduct tainted about 24,000 convictions—but the SJC refused to dismiss all of them at once, explaining that such “strong medicine … should be prescribed only when the government misconduct is so intentional and so egregious that a new trial is not an adequate remedy.” Instead, the court created a protocol through which “Dookhan defendants” could obtain relief. Ultimately, the protocol yielded the dismissal, with prejudice, of 21,839 wrongful convictions.

Sept. 21 2017 4:50 PM

Equinox Has Had a Rough Couple Weeks on Twitter, Thanks to Autocorrect

A word’s autocorrect sibling is its evil twin, the double that shows up unbidden whenever you want the other word. It’s the ducking worst. Recently the luxury gym company Equinox has been forced to reckon with its autocorrect sibling: the similarly prefixed but entirely different company Equifax.

Equifax has not been having a good month. On Sept. 7, the credit reporting agency announced a breach that exposed the sensitive data of 143 million Americans. Customers have been understandably frustrated ever since, jamming Equifax’s phone lines demanding answers. Social media users have also been translating their anger into tweets … many of which have been inadvertently directed at the ‘Nox instead of the ‘Fax. Thank the text software that automatically changes words it thinks are misspelled. (My phone didn’t make this particular correction, actually, but congrats to Equinox for achieving this strange and modern level of notoriety in some tech companies’ spell-check dictionaries.)
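The mechanism behind these mix-ups is simple: autocorrect favors known dictionary words within a small "edit distance" of whatever you typed. A toy sketch of the idea (this is an illustration, not any phone's actual algorithm) shows just how close the two names are:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance, computed with a single-row DP table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # delete a character from a
                dp[j - 1] + 1,      # insert a character into a
                prev + (ca != cb),  # substitute (free if the characters match)
            )
    return dp[-1]

def autocorrect(word, dictionary, max_dist=2):
    """Return the closest dictionary word within max_dist, else leave it alone."""
    best = min(dictionary, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else word

# "Equifax" is only two letter substitutions away from "Equinox"
print(edit_distance("equifax", "equinox"))             # 2
print(autocorrect("Equifax", ["Equinox", "Equator"]))  # Equinox
```

With an aggressive threshold like this, a rarely typed word sitting two letters from a famous one gets "fixed" every time.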


Equinox can’t help that its destiny has now been linked with Equifax through the cruel voodoo of autocorrect. But it does have to deal with the fallout. And so the brand, which declined to comment for this story, has ever so gently been trying to push the message that its data was not part of the recent breach.

It’s not just on social media. If you do a Google News search for Equifax and Equinox, several news articles accidentally refer to Equifax as Equinox at one or more points, which is probably giving heart palpitations to the people monitoring Equinox’s Google Alerts.

Equinox is understandably eager to be excluded from this narrative. Wanting to distance itself from this situation makes sense—no company wants to be associated with such a major hack. And since Equinox is no stranger to controversy, having gotten negative attention for several ad campaigns over the years, the brand is probably pleased to be in the clear this time. But the gym chain is walking a careful line and not trumpeting its lack of involvement too loudly: Note that none of those tweets say, “It’s Equifax NOT Equinox” or throw any sort of shade at customers’ autocorrect errors. That is some very cool and collected customer service. In addition to having had its own run-ins with the internet outrage machine, Equinox probably recognizes that the 143 million possibly affected by the breach must include at least a few Equinox members, and therefore a little sensitivity is warranted.

Still, an Equinox data breach sounds a lot more luxurious than an Equifax data breach. The gym hasn’t earned the nickname Chicquinox for nothing. As one Twitter user joked (that’s code for I am now going to steal his line), what would get breached, your gym locker combination?

But then you remember that Equinox has credit card numbers and probably a bunch of other sensitive information, too (pull-up rate, body-mass index?). Equinox members may belong to snooty gyms, but they still deserve secure data. Inasmuch as secure data is even possible anymore—just about everyone has been exposed to one of these breaches at one point or another. Equinox must know that it’s probably only a matter of time before a hacker gets his hands on those locker combinations. Until then, enjoy those Kiehl’s products.

Sept. 20 2017 6:02 PM

Sheryl Sandberg Says Facebook Is Overhauling Its Ad System That Allowed Anti-Semitic Targeting

Hey, look, journalism works! On Wednesday, Facebook Chief Operating Officer Sheryl Sandberg posted a lengthy note to her Facebook page announcing that the company has taken steps to remedy a situation pointed out last week by the nonprofit news outlet ProPublica, which found that the social media giant lets advertisers target users interested in anti-Semitism and other hateful categories. The investigation found that someone could buy ads that would reach “Jew haters” or people interested in “how to burn Jews.” A follow-up post by Slate found that even more categories, like “killing bitches,” “threesome rape,” and “killing Haji,” could also be used to tailor ads to Facebook users.

Sandberg writes that Facebook is now clarifying its advertising and enforcement process to ensure that content that “directly attacks” people based on race, sex, gender, national origin, religion, sexual orientation, or disability can’t be used to target ads. Sandberg says that targeting of this kind has always been against Facebook’s policies; apparently those policies weren’t enforced in this area until now.


Facebook is adding more humans to review its automated ad system, Sandberg wrote, and is reinstating its “5,000 most commonly used targeting terms” that Facebook has deemed do not peddle hate speech, though it’s not clear whether these are the only terms that Facebook will now allow advertisers to use in targeting. Finally, Facebook will build a new way for users to crowdsource complaints about ads, such as when someone believes ads are being targeted to people based on their race or religion. Of the last fix, Sandberg says similar methods have worked in other parts of Facebook and should carry over to ads. It’s not obvious what she means here either—she uses the term “technical systems”—and I’ve asked Facebook to clarify Sandberg’s statement. (I’ll update this post if the company responds.)
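An allowlist approach like the one Sandberg describes can be sketched as a simple filter: only pre-vetted terms survive, and everything else is rejected by default. A hypothetical illustration (the terms and function names here are invented, not Facebook's actual API):

```python
# A pre-vetted allowlist of acceptable targeting terms (invented examples)
APPROVED_TERMS = {"gardening", "jazz", "marathon training"}

def filter_targeting_terms(requested):
    """Split an advertiser's requested terms into allowed and rejected,
    keeping only those on the approved allowlist."""
    allowed = [t for t in requested if t in APPROVED_TERMS]
    rejected = [t for t in requested if t not in APPROVED_TERMS]
    return allowed, rejected

allowed, rejected = filter_targeting_terms(["jazz", "some hateful term"])
print(allowed)   # ['jazz']
print(rejected)  # ['some hateful term']
```

The trade-off is the one the post raises: a default-deny list is safe but blunt, and the open question is what happens to the long tail of terms nobody has vetted yet.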

It’s also unclear how Facebook plans to deal with ad targeting that doesn’t directly attack people but allows for racist, homophobic, or sexist stereotyping anyway. Facebook’s full metric for what constitutes hate speech isn’t public information. But documents obtained by ProPublica earlier this summer reveal that the platform has used formulas that prohibit hate speech against “protected categories,” which include sex, gender identity, race, religion, national origin, serious disability or disease, and sexual orientation, when it comes to content posted by users. Facebook is more permissive, however, when it comes to hate speech directed at subsets of these categories, like age, political ideology, appearance, social class, or occupation. Under that formula, Facebook permitted hateful speech against black children, since “children” is an age subset, but not against white men, whose defining categories, race and sex, are both protected. If Facebook is using the same standards for ads that it uses on individual posts, then the company might allow ads that target people who dislike poor people, for example. Or an advertiser might be able to target users based on their hatred of people who are perceived as fat.
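ProPublica's reporting suggests the formula works like a simple intersection test: a group keeps its protection only if every qualifier describing it is itself a protected category. A hypothetical sketch of that reported logic (this is an illustration built from the leaked descriptions, not Facebook's actual code):

```python
# Categories reportedly protected in Facebook's internal formula
PROTECTED = {"sex", "gender identity", "race", "religion",
             "national origin", "disability", "sexual orientation"}
# Modifiers reportedly treated as unprotected subsets
UNPROTECTED_SUBSETS = {"age", "political ideology", "appearance",
                       "social class", "occupation"}

def is_protected_group(attributes):
    """Under the reported formula, a group loses protection as soon as
    any one of its qualifiers falls outside the protected categories."""
    return all(attr in PROTECTED for attr in attributes)

# "White men" = race + sex: both protected, so attacks are blocked
print(is_protected_group({"race", "sex"}))  # True
# "Black children" = race + age: age is an unprotected subset, so attacks slip through
print(is_protected_group({"race", "age"}))  # False
```

The perverse outcome the documents described falls directly out of that `all()` check: adding any unprotected modifier strips the whole group of protection.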

Adding more people to the mix might help, since humans are likely better at knowing when something is offensive and when something isn’t. But unless Facebook publishes clear guidelines that it enforces consistently, simply adding more staff to try to fix the anti-Semitism that its system condoned might not free Facebook of its ad-targeting dilemma.

Another thing that might help: Facebook should hire more diverse technologists. According to the company’s last diversity report, its technical staff is 81 percent male and 1 percent black. You have to wonder: If there were more women or under-represented minorities on Facebook’s engineering product teams, would this flawed ad tech have even seen the light of day?

Sept. 20 2017 4:58 PM

Netizen Report: Germany’s New Social Media Law Puts a Price on Hate Speech


The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Ellery Roberts Biddle, Mohamed ElGohary, Pauline Ratze, Elizabeth Rivera, and Sarah Myers West contributed to this report.

A new German law going into force in October will impose fines on social networks if they fail to remove “manifestly unlawful” hate speech within 24 hours of being posted. Under the Netzwerkdurchsetzungsgesetz, called the NetzDG for short, companies have up to seven days to consider the removal of more ambiguous material.


Germany’s criminal code already defines hate speech, so the law does not create new measures or definitions. Instead, it forces companies to police hate speech or face astronomical fines. The law is unprecedented at the global level and could have game-changing ripple effects worldwide.

The final draft of the law sets clear punishments for companies that fail to comply and places the burden of determining what messages, images, or videos count as hate speech on companies themselves. It also compels companies to create stronger mechanisms for transparency around their processes for taking down content. But it does not prescribe a legal mechanism to appeal the removal of material.

In an interview with the BBC, an unnamed Facebook spokesperson said that the law “would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies.”

Even without the law in place, this responsibility already exists in many dimensions. Companies typically have full authority over users’ accounts and postings—when a user’s account is suspended or content is taken down, that person is often unable to access information about the decision or have direct contact with an actual company employee who can help to resolve disputes. The same is true for users who report abusive content or messages and receive no remedy.

On top of these concerns, there is consensus among the law’s critics that it will result in overcompliance—and thus, increased censorship—by companies eager to avoid fines. David Kaye, U.N. special rapporteur on freedom of expression, said of the law:

With these 24 hour and seven day deadlines—if you are a company you are going to want avoid fines and bad public branding of your platform. … If there is a complaint about a post you are just going to take it down. What is in it for you to leave it up? I think the result is likely to be greater censorship.

Rohingyas are being driven out of Myanmar—and off of Facebook
Rohingya activists say that their Facebook posts documenting what the U.N. now says is the ethnic cleansing of their people (a mostly Muslim ethnic minority group) are routinely being removed or their accounts suspended. This is particularly significant given the proliferation of anti-Rohingya propaganda online, and the mounting barriers to accessing accurate information about the conflict. These factors make Facebook and other social media platforms a critical space for spreading information about the conflict.

UAE court rules against Indian Facebook user who “insulted” the prophet
A United Arab Emirates court upheld the sentence of an Indian migrant worker who was sentenced to one year in prison for posting comments on Facebook that allegedly “disrespected” and “insulted” the Prophet Muhammad. The man claimed that hackers had posted these messages, but the court rejected that argument.

Iranian developers petition Apple to keep their apps online
Iranian app developers are mounting a petition against Apple for blocking their apps from the App Store. In a Change.org petition, a group of developers ask Apple CEO Tim Cook to “stop removing Iranian applications from [the] App Store and lift policies that are limiting our access to the products and services offered via Apple’s platforms.”

Multiple developers have reported that when they submit an app for review, they receive a message indicating that the App Store “cannot host, distribute, or do business with apps or developers connected to certain U.S. embargoed countries.”

Apple began shutting down Iranian apps in August, the same month President Donald Trump signed a new sanctions bill into law, but it remains unclear whether the administration meant to impose new restrictions on technology companies. European countries lifted their sanctions against Iran after the nuclear agreement took effect in 2016.

Sept. 20 2017 4:15 PM

Equifax Repeatedly Tweeted the Wrong URL for Its Website About the Data Breach

Equifax has suffered a critical decline in public trust over the last few weeks after a security breach exposed the private data of about 143 million people. The company’s Twitter account only made matters worse.


Sept. 20 2017 3:02 PM

Future Tense Newsletter: Predictions Gone Bad

Greetings, Future Tensers,

This week, Future Tense continues to celebrate the “Future of the Future,” a look at how we try to predict tomorrow. It seems like every major tech breakthrough is always “five to 10 years away.” Grace Ballenger and Aaron Mak took a look at dozens of predictions of the technologies coming in “five to 10 years” from the past three decades, such as consumer virtual reality. You’ll notice that many of them still haven’t come to fruition. When it comes to fashion, it might be a relief that some of our wildest visions didn’t come true, as demonstrated in this fun compilation of the best of future fashion from television and movies.


But futurism isn’t just all VR goggles and weird hairdos. “Threatcasting” attempts to imagine the worst-case scenarios of our future to help prepare us for a safer one, explain Brian David Johnson and Natalie Vanatta. One inherent flaw to the field of futurism, says Joey Eschrich, is that it tends to assume capitalism will continue. Mark Joseph Stern points out that even Supreme Court justices try to make predictions about the ramifications of legal decisions, even if their guesses are normally pretty far off.

Other things we read this week while cringing at another story on discriminatory A.I.:

  • Better laws for big data: Rep. Ted Lieu advocates for laws that would require faster disclosures of data breaches after the Equifax hack.
  • International “technoneurosis”: Chen Qiufan argues that China needs to stop letting “fear and greed” drive its technological progress.
  • Buckle up: The Trump administration thinks the best regulation for self-driving cars is almost no regulation. April Glaser explains why that’s a very bad idea.
  • Cringe-worthy keywords: After ProPublica reported on how Facebook’s automated advertising tools let people target “Jew haters” and other offensive categories, Slate uncovered the problem goes much, much deeper.
  • Move over, space race: Competition between China and the United States to build the world’s fastest supercomputer could help us make better predictions about future events like earthquakes, writes April Glaser.
  • Mario has nipples: Jacob Brogan asks what other secrets our favorite video game characters’ bodies may hold.

The Future of Mental Health Technology: From chatbots that provide therapeutic conversation to apps that can monitor phone use to diagnose psychosis or manic episodes, medical providers now have new technological tools to supplement their firsthand interactions with patients. Join Future Tense in Washington on Sept. 28 to consider how these and other innovations in technology are reimagining the way we treat mental illness. RSVP to attend in person or watch online here.

Is Big Tech an Existential Threat?: In World Without Mind: The Existential Threat of Big Tech, a powerful critique of the role companies like Amazon and Google play in our economy and in our lives, Franklin Foer argues that the success of these tech juggernauts, with their gatekeeping control over our access to the world's information, has created a new form of dangerous monopoly in American life. Join Future Tense in New York on Oct. 4 as Foer discusses his new book with Slate Group Editor-in-Chief Jacob Weisberg. RSVP to attend here.

No longer nostalgic for the ’90s,
Tonya Riley
For Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

Sept. 20 2017 3:02 PM

Why Do the Terms of Service for This Dating App Have a Whole Section on Polo?

The Inner Circle is one of those “elite” dating apps that promises to connect its users with a better pool of potential dates than the regular Joes overrunning Tinder and OKCupid. So naturally its terms of service include a lengthy section about polo, the horseback sport favored by British aristocrats. Wired writer Graham Starr pointed out the oddity on Twitter:

Can we really blame the app for taking polo so seriously? Clearly its founders have cracked the key to dating success, and polo is it. Well, polo and LinkedIn profiles, which is how the Inner Circle vets potential members. Launched in Europe (ooh la la) in 2013, the app came to New York last year and in 2017 has set its sights on expanding across the U.S. And to do that, it’s got to expand membership, which it can’t do unless it makes sure everyone is on the same page about its annual polo event.


The terms and conditions start off seeming pretty standard: your usual sections like Definitions, Conflict and Modifications, Eligibility, Registration, etc.—all the stuff you probably skip over because no one really reads terms and conditions. But then, after the Third Parties heading comes Events, and under that, the all-important Summer Polo Festival. How did people even meet and fall in love before dating apps threw polo festivals to facilitate it?

The rules boil down to the following: 1) You attend at your own risk; 2) the app “will not accept responsibility or liability whatsoever for any kind of injury, loss or damage” that occurs at the polo festival; 3) by attending you agree that the app can do whatever it wants with photos and recordings of you “for worldwide exploitation, in perpetuity in any and all media”; and 4) “Drinks and food are not included in the ticket price. It’s not permitted to bring your own drinks and food to the event.” Glad we’re clearing this up first!

First of all, I had no idea it was so dangerous to attend polo events. Maybe that’s just the risk you take when you want to meet people these days? Like, if you’re looking for love, you better reckon with the fact that you may have to get kicked in the face by a horse in order to find it. I’m imagining myself attending this event, and instead of meeting an elite fellow heterosexual to date, I get a dire injury, and the Inner Circle not only refuses to help me but films the incident and uses the footage for nefarious exploitation purposes in perpetuity. I don’t fall in love, but somehow other people fall in love over my injury. Also, I am very hungry the whole time because I am not allowed to have my own food and drinks. And according to the terms of service that I agreed to, this is all completely legal. Damn. The things we do for love.

Sept. 20 2017 2:43 PM

Amazon Is Suggesting “Frequently Bought Together” Items That Can Make a Bomb

When you go to Amazon to buy something, Amazon will suggest other products that are “frequently bought together.” This feature can be practical—if you’re buying a Swiffer Wet Jet, it’s probably wise to pick up the cleaning fluid it uses, too—or just strange. But as an investigation from a British television station pointed out Monday, sometimes Amazon’s suggestions can amount to a deadly combination.

A team from Channel 4 News found that Amazon has been prompting customers looking to buy “a common chemical used in food production” to also purchase other ingredients that, together, could be used to produce black powder. (The report did not specify exactly which ingredients, for obvious reasons.) Further down the page, according to the report, Amazon also nudged the customer to buy ball bearings, which can be used as shrapnel in homemade explosives.


Amazon has responded by saying it’s reviewing its website to make sure that products are “presented in an appropriate manner.” Still, the report comes at a time of heightened fear of terrorist attacks in the U.K. On Friday, a homemade bomb left in a plastic bucket detonated on a crowded train car in London, injuring 30 people. It was the fifth terrorist attack in Britain this year.

Though the “frequently bought together” items on Amazon aren’t illegal on their own, the Channel 4 News report noted that there have been successful prosecutions in the U.K. against people who buy chemicals that can be combined to make a bomb.

Amazon’s “frequently bought together” suggestions are generated algorithmically. Which puts Amazon on a growing list of major tech companies currently under fire for relying on algorithms that surface troubling results. On Sept. 14, ProPublica released a report detailing how Facebook lets advertisers target ads directly to people who describe themselves as “Jew haters” or are interested in the topic of “how to burn Jews.” Slate took the investigation further, finding Facebook suggested we send ads to users interested in topics like “killing bitches,” “threesome rape,” and “killing Haji.” BuzzFeed also found that Google’s ad targeting tool suggested ad buyers consider targeting users with topics like “black people ruin neighborhoods” and “Jewish control of banks.” Google and Facebook have both said that they are working to change how they let advertisers target users.

The whole point of algorithms is that they work on their own, so that Amazon doesn’t need to hire a person to sit there and think of items that might go well together. But that hands-free design doesn’t excuse Amazon, or any other tech company, from keeping a close eye on how its automated systems work in practice.
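Recommenders like this are typically built on co-occurrence counts over past orders: for a given item, surface whatever else appears most often in the same baskets, with no human judging whether the pairing is sensible. A minimal sketch of that general approach (the item names are invented; this is not Amazon's actual system):

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(orders):
    """Count how often each unordered pair of items appears in the same order."""
    pairs = Counter()
    for basket in orders:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def frequently_bought_together(item, orders, top_n=2):
    """Rank other items by how often they co-occur with `item` across orders."""
    scores = Counter()
    for (a, b), n in co_purchase_counts(orders).items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

orders = [
    ["mop", "cleaning fluid"],
    ["mop", "cleaning fluid", "sponge"],
    ["mop", "sponge"],
]
print(frequently_bought_together("mop", orders))  # the two common co-purchases
```

Nothing in the counting step knows what the items are; a chemical and ball bearings look exactly like a mop and cleaning fluid, which is why audits of the output still matter.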

Sept. 15 2017 2:52 PM

Future Tense Event: Franklin Foer to Discuss World Without Mind With Jacob Weisberg

Tech companies like Google, Amazon, Apple, and Facebook have revolutionized our lives, connecting us in ways that were once unimaginable—to one another, to information, and to entertainment. Conventional wisdom leads us to believe that the technologies unleashed by these corporations have empowered us as individuals. But is that really the case?

In World Without Mind: The Existential Threat of Big Tech, a powerful critique of the role these companies play in our economy and in our lives, Franklin Foer argues that the success of these tech juggernauts, with their gatekeeping control over our access to the world's information, has created a new form of dangerous monopoly in American life. Does our infatuation with the technological wonders these companies offer distract us from the price we pay as a society in terms of surrendered privacy, intellectual property rights, and diversity of worldviews? Is our sense of individual empowerment merely an algorithm-fed illusion?


Join Future Tense on Wednesday, Oct. 4, in New York for a conversation with Franklin Foer and The Slate Group Chairman Jacob Weisberg to discuss World Without Mind and the role of these new technologies in our lives. For more information and to RSVP, visit the New America website.

Sept. 15 2017 2:14 PM

We Need a Law Requiring Faster Disclosure of Data Breaches—Now

The Equifax hack is highly disturbing not only because of its massive scope, but also because of the specific type of personal data that was stolen. Credit reporting agencies are supposed to be one of our lines of defense in data security and privacy protection—and Equifax failed in its core mission. Moreover, by waiting six weeks to notify customers, Equifax robbed them of the crucial window during which they may have been able to stem some of the damage. Now, people claiming to be the hackers are demanding Equifax pay roughly $2.6 million in Bitcoin, threatening to dump data on nearly all those affected if they aren’t paid by Sept. 15.

In a world where one line of faulty computer code can mean the difference between normalcy and chaos, it is often not a question of if, but when, the most sensitive systems will be hacked. Given this reality, we must improve our ability to react at every level after companies have been breached. The Equifax debacle exposed three deficiencies in our laws that need to be corrected: We need better protections for consumers, a national reporting system for data breaches, and strong cybersecurity standards for credit reporting agencies.


Companies that hold our most sensitive data need to rethink their relationship with the public. Executives at major firms swear no oaths, but they are just as responsible for the well-being of the American people as any member of Congress—especially today, when companies collect and analyze more data on the average citizen than the government does. Equifax failed not because its attackers were unstoppable. Rather, it failed because it took its role as digital gatekeeper for granted. Reports show that Equifax failed to apply a known patch that may have prevented the data breach.

In the aftermath of an attack, every employee—from the CEO to the interns—has to focus on two key goals: stop the bleeding and restore confidence. Instead, Equifax customers were faced with predatory and woefully inadequate services. The company’s rollout of a website used to inform customers of their account status was riddled with technical flaws. In some instances, the very programs Equifax offered to monitor the status of user data were themselves flagged by antivirus software as phishing scams.

If users did manage to get a straight answer about the status of their data, they soon discovered they were barred from suing Equifax due to a fine-print mandatory arbitration clause. Thanks to New York’s attorney general, Equifax has changed its policy—at least in the case of this hack. Yet the fact remains: It is outrageous that Equifax was planning to take advantage of its customers’ precarious position by stripping their rights to sue if they relied on the company’s identity theft service.

To end this consumer abuse, I plan to introduce legislation that would prevent companies from enforcing forced arbitration clauses in the event of a data breach. While my colleagues and I will focus intently on Equifax during the digital autopsy phase to come, we also have to turn our gaze inward. We need to pass a national data breach notification law—now.

Currently, a muddled patchwork of 48 different state laws governs when and how companies are required to report data breaches. Aside from disadvantaging people who live in states with more lax reporting requirements, it also complicates things for companies that want to comply. Increasingly, data isn’t stored in one single place. Depending on a firm’s network architecture, a user’s account information can exist in, say, Newark, Los Angeles, and Chicago all at the same time. That means three—or often more—competing sets of laws.

Add to this the fact that Equifax and similar firms often fall through the regulatory cracks when it comes to oversight (credit reporting agencies are less heavily regulated and monitored than banks, although they hold a goldmine of data) and a stark picture emerges. Strong cybersecurity standards may have prevented this breach. On this front, I plan to offer legislation that would compel credit reporting agencies to adopt clear cybersecurity standards similar to those of the financial industry.

In the coming weeks, Equifax and its top executives will be scrutinized by investigators at the FBI, FTC, and several congressional committees. Congress must serve as a catalyst for action, bringing together consumers who demand better cybersecurity, encouraging agencies to conduct thorough oversight, and helping firms recognize that post-incident services are a crucial part of good data stewardship. Together, we can begin to develop a system that works for the 21st century.