Future Tense
The Citizen's Guide to the Future

June 28 2017 3:39 PM

Facebook’s Leaked Censorship Policies Show How Bad the Company Is at Policing Hate Speech

On Wednesday morning, ProPublica published a troubling report about Facebook’s approach to censorship. Drawing on a “trove of internal documents,” it laid out some of the rules that the company’s content reviewers use to determine whether they should censor a post. (It's not clear from the article whether those moderators are employees or subcontractors, though Facebook has relied on the latter group in the past.) Those documents underscore just how clumsy the company can be when it comes to dealing with hate speech, partly because it insists on tackling the issue in algorithmic terms.

As ProPublica’s headline puts it, “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children.” While that’s just part of the problem, it’s also not hyperbole. To the contrary, a training slide reproduced by ProPublica establishes that very distinction, asking which of three groups—female drivers, black children, and white men—it aims to protect. Puzzlingly, it uses a photo of the Backstreet Boys to illustrate white men, but more baffling is the answer to the query: Out of that trio, Facebook only “protects” white men.


As ProPublica goes on to explain, this is an effect of the way Facebook defines its “protected categories” and the way those categories relate to one another. The company reportedly includes a broad array of terms under its protected rubric, including race, gender identity, sexual orientation, and national origin. On the other hand, it declines to protect a range of other categories, including social class, age, occupation, and appearance. Say something awful about the members of a protected category (e.g., “Women deserve to be beaten”), and your comments might get censored. Similar assertions about the members of a non-protected category such as an age demographic (e.g., “Millennials should all be set on fire”), on the other hand, will inspire no action from the site.

While the logic determining what counts as a protected category is already opaque, things get even more complicated when Facebook’s users start to combine these categories. As ProPublica shows in a series of slides, if a user pairs two protected categories together (“Irish women”), the resulting conglomerate is still considered a protected category. If, on the other hand, someone links a protected category to a non-protected category (“Irish teens”), the composite is not protected, and users can say whatever they want with impunity. This is why Facebook considers threats against white men hate speech while it ignores those against black children: While the former combines two protected categories (race and gender), the latter includes one protected and one non-protected category (race and age, respectively).

In effect, the company’s approach is algorithmic, even if humans implement the rules of that algorithm. As the slides from ProPublica show, Facebook’s censorship principles are reducible to a simple set of equations—“PC + PC = PC while PC + NPC = NPC,” for example. As Will Oremus has observed, content moderation can be difficult, taxing work. These spare formulas may well be a blessing for the company’s human censors, giving them the tools to quickly determine what’s acceptable and what’s not with a modicum of thought. They’re given a set of simple instructions that work everywhere, freeing them from the burden of granular judgment.
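The combination rule the slides describe can be sketched in a few lines of Python. The category labels and the PC/NPC logic come from the ProPublica slides as reported above; the function and set names here are hypothetical, chosen only for illustration, and are not Facebook’s actual implementation.

```python
# A minimal sketch of the rule described in the ProPublica slides:
# a composite group is protected only if EVERY component category is
# protected. Labels follow the article; names here are illustrative.

PROTECTED = {"race", "gender", "gender identity",
             "sexual orientation", "national origin"}
NON_PROTECTED = {"social class", "age", "occupation", "appearance"}

def is_protected(categories):
    """PC + PC = PC, but PC + NPC = NPC: a single non-protected
    component makes the whole composite non-protected."""
    return all(c in PROTECTED for c in categories)

# "White men" combines race and gender -> protected.
print(is_protected({"race", "gender"}))  # True
# "Black children" combines race and age -> not protected.
print(is_protected({"race", "age"}))     # False
```

The sketch makes the exploit discussed below easy to see: appending any non-protected modifier flips the result to unprotected.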

But that same convenience also makes the system easy to exploit: Those looking to denigrate a given group need only apply a well-chosen modifier if they want to avoid oversight. Indeed, as ProPublica notes, Donald Trump’s anti-Muslim posts “may also have benefited from the exception for sub-groups. A Muslim ban could be interpreted as being directed against a sub-group, Muslim immigrants, and thus might not qualify as hate speech against a protected category.” Where the most complex computer algorithms threaten to reaffirm existing sociocultural biases, this relatively simple human algorithm offers the biased an out, so long as they’re willing to get specific about their hate.

While the social network may well recognize that flaw, it’s clear enough why it employs this system: Above all else, Facebook’s censorship policy is defined by the utopian assumption of universal egalitarianism. As ProPublica notes, the social network attempts to apply the same rules everywhere, a few regional exceptions aside. Accordingly, it begins from the presumption that all of those who fall under its protected categories are potentially subject to hate speech. Thus, “ban men” might potentially be understood as hate speech (since it calls for the exclusion of all men), in much the same way that calls for violence against women would be.

The trouble is that hate speech doesn’t play out in sanitized vacuums: Slurs accumulate in real circumstances and real time, drawing strength from the particularity of each new repetition. Where Facebook apparently aims to treat such language in the abstract, it is this situational specificity that hones the edges of ugly words, giving them the power to cut. The company’s so-called “non-protected categories” don’t just offer users an out when they want to say something vile about one group or another; they also threaten to make vile language that much more violent.

Simply put, Facebook doesn’t understand how hate speech works. Language’s potential to harm is inevitably proportional to the marginality of those it targets. While it’s certainly possible, for example, to say loathsome things about white men, those insults will rarely, if ever, have the same weight as those made against already imperiled groups. Further, marginality, for its own part, is always a circumstantial problem, not one that’s everywhere the same, which makes Facebook’s universally inclined approach that much more meaningless. In practice, acting as if all language were the same everywhere and for everyone can only have one effect: Reaffirming the security of those who are already in power by shielding them from criticism.

For Facebook, that may well be a winning proposition.

June 28 2017 2:29 PM

Future Tense Newsletter: Gay Dating Apps Create Connections Beyond the Bedroom

With tens of millions of users in some 200 countries, it’s hard to overstate the reach of dating apps for gay and bisexual men. Now, these apps are starting to reach beyond their hookup origins to offer their users new ways to explore their identities and forge connections. Apps like Grindr, Hornet, and Scruff are drawing users in by organizing events, covering queer topics in online publications, and taking on advocacy work. Brandon Tensley explores the extent of their impact, writing, “These apps are playing host to conversations—silent and verbal, private and public—about what, exactly, the queer experience can entail. They’re helping, in other words, make the connections so many queers have been yearning for all along.”

In other news, the Department of Homeland Security is starting to scan passengers’ faces before they board international flights. Decades ago, Congress mandated that federal authorities keep track of foreign nationals as they enter and leave the United States. But with the launch of its new “Biometric Exit” program, DHS is—without congressional authorization—also scanning the faces of American citizens. This opens the door to a host of privacy and civil liberty issues that will continue to fly under the radar. Meanwhile, Andrea Peterson writes that despite Trump’s own complaints of having his “wires tapped,” he has yet to fill vacancies on a board that serves as a key government watchdog charged with overseeing surveillance activities.


Other things we read this week while considering the consequences of Trump’s attacks on federal lands:

  • Culinary technology: Jacob Brogan samples a meal delivery service that is trying to make its name by focusing on gee-whiz tech.
  • Quakebot: Last week, the Los Angeles Times Quakebot tweeted about an earthquake that happened 90 years ago. Angelica Cabral explains what caused the confusion and what this incident teaches us about where our data comes from.
  • Musk’s plans for Mars: Earlier this month, Elon Musk published his plans to make humans a multiplanetary species. Andrew Coates, a professor of physics at the University College London, questions just how plausible Musk’s plans to colonize Mars actually are.
  • Dangers of social media: Emily Parker counters the argument that Facebook and Twitter are destroying America and warns against excessive techno-pessimism.
  • EU fines Google: On Tuesday, European Union regulators slapped Google with a $2.7 billion antitrust fine for unfairly promoting its own shopping comparison services over those of its rivals.


  • Three years after the release of their best-selling book, The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee are back with a deep dive into the key forces driving our increasingly digital age. Join Future Tense on Thursday, June 29 (yes, tomorrow!), in New York for a conversation with the pair about their latest book, Machine, Platform, Crowd, and about how to build a future that doesn’t leave humans behind. RSVP to attend here.
  • Join Rep. Ted Lieu, a Democrat from California’s 33rd Congressional District, on Thursday, July 13, for the latest installment of our “My Favorite Movie” series. He’ll be hosting a screening of Ex Machina at Washington, D.C.’s Landmark E Street Cinema. You may RSVP for yourself and up to one guest here.

Will make you read this article,
Emily Fritcke
For Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

June 28 2017 12:22 PM

Come Watch Ex Machina With Rep. Ted Lieu in Washington, D.C.

Join Rep. Ted Lieu, a Democrat from California’s 33rd congressional district, on Thursday, July 13, in Washington, D.C., for a screening and discussion of the 2014 BAFTA-nominated film Ex Machina. This science-fiction thriller follows a young programmer who is invited by his eccentric CEO to spend the weekend testing the humanity of an intelligent female robot. In the process, he’s forced to question his own morality.

The latest installment of our “My Favorite Movie” series will take place at Washington, D.C.’s Landmark E Street Cinema at 555 11th Street NW. Seating is limited. For more information and to RSVP, visit the New America website.

June 28 2017 11:20 AM

Eastern European Countries Are Getting Very Good at Responding to Cyberattacks

The same countries are regularly ranked as having the greatest offensive cyber capabilities—Russia, the United States, Israel, China. But when it comes to the nations with the greatest ability to absorb, withstand, and recover from cybersecurity incidents, there’s a strong case to be made that we all have something to learn from Eastern Europe. The classic case is Estonia, which bounced back from aggressive denial-of-service attacks in 2007 to become a world leader in promoting international efforts for computer security. But this week’s reports of malware directed at Ukraine’s state power distributor and banks appear to offer another model for how to respond to a cybersecurity breach.

Ukraine is no stranger to seeing its computer systems targeted—earlier this month, Wired’s Andy Greenberg described in incredible detail Russia’s ongoing efforts to compromise Ukraine’s cyber infrastructure over and over and over again. This most recent strain of malware may or may not come from Russia. In one of the most understated accusations ever made in the immediate aftermath of a cybersecurity incident, Ukraine’s security council secretary reportedly said of the breach, “it is possible to talk of Russian fingerprints.”


Ukraine seems to be rallying surprisingly well in the wake of the intrusions. Reuters reported that Ukrainian lender Oschadbank had not lost any customer data due to the unidentified virus, while power distributor Ukrenergo said the malware had not affected its power supplies. Meanwhile, the deputy prime minister, Rozenko Pavlo, tweeted a photo of his computer screen showing a suspicious error message, and the country’s main Twitter account tweeted a surprisingly humorous announcement reassuring people.

Certainly, the attacks were disruptive. There were reports of ATMs not working and subway passengers who were unable to enter the metro using their bank cards. The boards listing flight schedules at the Boryspil International Airport in Kiev, as well as the airport’s website, were malfunctioning. Ukrainian officials have not commented on what type of malware they’re seeing, referring to it only as an “unknown virus,” and the screenshot that Pavlo tweeted doesn’t indicate any ransom demands. But Christiaan Beek, a lead scientist at McAfee, told Wired that it appeared to be a strain of the ransomware family dubbed Petya or Petrwrap, itself a relative of the WannaCry ransomware that affected systems worldwide earlier this year.

But where WannaCry was disastrous—crippling the U.K.’s National Health Service and leaving hospitals unable to treat patients—in Ukraine, at least, this latest incident seems ultimately to have been only disruptive, not destructive. Ukrainians were clearly inconvenienced, but pretty much every single piece of infrastructure targeted there was able to limit the damage. Though banks were unable to perform some services, their customers’ money and data were secure. The power distributor saw some of its computers infected but did not witness another blackout due to the compromise, as it did late last year. People couldn’t use their bank cards to enter the subway, but the trains were still running. The airport’s website and flight boards were affected, but flights continued to arrive and take off, though airport director Pavlo Ryabikin warned that there might be some delays. At the Chernobyl nuclear plant, computers were down, but workers were continuing to monitor radiation manually.

It’s possible that this incident was never meant to be anything more than an inconvenience—that whoever launched it only wanted to cause minimal damage and disruption rather than all-out catastrophe. But it’s also possible that this was in fact meant to cause damage of a comparable scale and scope to WannaCry and previous cyber attacks directed at Ukraine—that it was intended to shut down businesses, black out cities, and stop trains and flights. And if that’s the case, then Ukraine did a remarkable job containing the damage and defending the most critical elements and functions of its critical infrastructure.

In the United States, it was not clear that things were going as smoothly. Pharmaceutical company Merck, which is based in New Jersey but also has an office in Ukraine, had to shut down computer systems and the Washington Post reported that “critical information tied to Merck drug research could be lost.” Meanwhile, the New York Times reported hospitals being forced to cancel operations in Pennsylvania.

Like Estonia, Ukraine seems to have learned from past cyber conflicts how to do a better job protecting computer systems and ensuring that daily life can continue largely unhindered even in the face of serious compromises. Of course, both Eastern European nations have much smaller populations than the United States, and it would be difficult to scale some of their solutions to a country so much larger and more decentralized. But Estonia and Ukraine also have fairly advanced online infrastructure, suggesting that they may well have defensive lessons to offer even to countries with relatively sophisticated technology, who may have spent more time and energy thinking about how to use computers to attack others than how to protect themselves.

June 27 2017 11:40 PM

This Study on the Most Effective Facebook Headlines Will Make You Cry Tears of Recognition

It’s hard to pinpoint exactly when online headlines began to converge on the now-familiar set of tropes that dominate our Facebook feeds, but a good guess might be 2012, the year Upworthy was founded and BuzzFeed’s traffic boomed. “You’ll Never Guess … ”; “33 Animals Who … ”; “What Happened Next”: Listicles and curiosity gap headlines proliferated as sites across the Web sought to mimic the viral success of posts painstakingly engineered to generate likes and clicks on social media. (For a stroll down memory lane, you can try our Facebook news feed headline quiz.)

It wasn’t long, of course, before the tropes became overly familiar, the gimmicks stale. By 2014, Upworthy had already peaked, and the Atlantic’s Alexis Madrigal reported that the curiosity gap was closing. Indeed, Upworthy announced in 2015 that it was pivoting from viral aggregation to original video content. Yet BuzzFeed has managed to sustain its upward trajectory by continually reinventing itself. And Upworthy’s abandonment of its once-successful formula in 2015 has not proved quite the death knell for social-media growth-hacking that it might have seemed at the time.


A new study of social media headlines from the content-analytics firm BuzzSumo suggests that the curiosity gap remains very much open in 2017, and the listicle’s appeal has endured as well. That said, some popular headline formulas appear to be working much better than others these days.

The study looked at 100 million headlines published between March 1 and May 10, 2017, to find the three-word phrases, or trigrams, that correlated with the highest and lowest levels of engagement on Facebook. Here are the top performers:



What’s surprising here is just how effective second-person headlines seem to be in provoking reactions from readers. “Will make you” is not only the highest-rated trigram for social engagement, but headlines that include the phrase drive more than twice as much traffic as any others in the study. (Note that BuzzSumo did not evaluate every possible trigram; the study’s author, Steve Rayson, told me it included only those that appeared in headlines on at least 100 different domains.) Typical “will make you” headlines, according to BuzzSumo, include “24 Pictures That Will Make You Feel Better About The World” and “What This Airline Did For Its Passengers Will Make You Tear Up — So Heartwarming.”
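The study’s basic procedure—extract trigrams from headlines, then keep only those that appear across many domains—can be approximated in a few lines. The sketch below is a simplified reconstruction, not BuzzSumo’s actual code; only the 100-domain threshold comes from Rayson’s description, and the sample data is invented.

```python
from collections import defaultdict

def trigrams(headline):
    """Yield each consecutive three-word phrase in a headline."""
    words = headline.lower().split()
    for i in range(len(words) - 2):
        yield " ".join(words[i:i + 3])

def popular_trigrams(headlines_by_domain, min_domains=100):
    """Keep only trigrams appearing in headlines on at least
    `min_domains` distinct domains, mirroring the study's filter."""
    domains_seen = defaultdict(set)
    for domain, headlines in headlines_by_domain.items():
        for h in headlines:
            for t in trigrams(h):
                domains_seen[t].add(domain)
    return {t for t, d in domains_seen.items() if len(d) >= min_domains}

# Tiny invented example, with a threshold of 2 domains instead of 100:
sample = {
    "site-a.com": ["24 Pictures That Will Make You Smile"],
    "site-b.com": ["This Trick Will Make You Rich"],
}
print(popular_trigrams(sample, min_domains=2))  # {'will make you'}
```

The domain filter is what makes the results skew generic: any phrase distinctive to a single publisher is discarded before ranking, a point the article returns to below.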

At first glance, the key commonality seems to be the direct appeal to readers’ emotions, which was one of Upworthy’s founding insights. The importance of emotion in a Facebook headline is underscored by the presence on the list of phrases such as “are freaking out,” “tears of joy,” “give you goosebumps,” and “melt your heart.”

But BuzzSumo’s Rayson pointed out to me in an email interview that there is a second genus of “will make you” post that also performs extremely well: the productivity/life-hack listicle. One of the most-shared headlines in the whole study, for instance, was, “10 Graphs That Will Make You Pro at Cleaning Anything.” (Ten, incidentally, is the optimal length for a listicle, according to BuzzSumo’s data.)

This suggests that the secret to the phrase’s success lies not only in its appeal to emotion, but also in its explicit promise to impact the reader in a specific way. These headlines work, in other words, by acknowledging the transactionality of the relationship between publisher and reader: You give us a click, and here’s exactly what we’ll give you in return. Another way of looking at it: These headlines make the story about you, the reader, rather than about some third-party subject.

That’s also true, in different ways, of the second- and third-top-performing phrases in the study: “this is why” and “can we guess.” The first promises to answer a specific question that the reader is curious about. The second, associated with quizzes such as BuzzFeed’s “Can We Guess Your Age Based on Your Sense of Humor?,” promises to hold a mirror to the reader based on her habits and tastes.

Interestingly, the phrase “what happened next”—one of the most infamous of the original Upworthy clichés—still seems to resonate, making the list at No. 20. But “you’ll never guess” is nowhere to be found: At some point, it evidently crossed the line into parody.

The phrases that generate the least Facebook engagement are instructive in their own right.



The bottom three—“control of your,” “your own business,” and “work for you”—all include the word “you,” which indicates that lots of publishers have internalized the notion that the second person works well on Facebook but haven’t quite figured out how to make it, well, work for them. One trend here is the emphasis on work or business: Facebook, it seems, is a place where people go to avoid work, and they don’t seem to like being reminded of it. Perhaps these headlines would fare better on LinkedIn.

Likewise, Rayson points out in his blog post that the phrase “on a budget” seems to be a turnoff on Facebook, yet it performs much better on Pinterest in conjunction with DIY projects. One lesson might be that each social media platform demands a different framing, which poses a challenge for headline writers accustomed to writing just one or two headlines per story.

There’s much more of interest in Rayson’s long post, and it’s worth reading in full for those who care about how social media shapes communication. It’s also worth a closer look at the study’s methodology before you draw firm conclusions from it.

By focusing on trigrams, BuzzSumo appears to have followed a similar approach to that employed by Max Woolf in a 2015 post that looked exclusively at BuzzFeed headlines. Yet whereas Woolf went deep on a single site, BuzzSumo did the opposite, including in its analysis no more than one headline per trigram from a given site in order to avoid overweighting posts from the most popular publishers. This led to some significant differences in the results: Whereas BuzzSumo identified the most popular listicle length as 10, Woolf found that the best-performing BuzzFeed listicles were much longer, often upwards of 30 items. Still, the trigram results should be familiar: the top headline phrases in Woolf’s 2015 analysis were “Character are you,” “before you die,” and “you probably didn’t.” Again, the word “you” is in all of them.

One error of interpretation that would be easy to make: assuming that the use of these phrases necessarily causes headlines to succeed or fail on Facebook. “Will make you” headlines may work at least in part because they tend to be attached to content that actually does resonate with a lot of readers in some way. Slapping that phrase on a post that doesn’t actually make readers do anything is likely to backfire, especially since Facebook has repeatedly altered its news feed rankings to punish publishers whose headlines make promises that their content doesn’t fulfill. To its credit, BuzzSumo makes this point in its own post about the study, including commentary from several social media pros who warn against using the trigram charts as a headline-writing cheat sheet.

This study might read as depressing to those who had hoped the worst of the headline-gimmick era was behind us. But there are at least two good reasons not to weep for the future of journalism and online discourse. (Aren’t you curious to know what they are?)

The first is that the study doesn’t actually tell us much about the relative prevalence of headline clichés on the Web today versus any other point in time. Because the study only includes phrases that appear in headlines on at least 100 sites, the ones that make the cut are bound to be generic and formulaic-sounding—anything distinctive or unique is ruled out of the running. It’s conceivable that the best-performing headlines of all are those that eschew these stock phrases altogether, but the study can’t tell us that.

The second consolation is that headline clichés have existed for just about as long as headlines have. In the print era, they tended toward terse, impersonal jargon rather than pathetic entreaty, but that didn’t necessarily make them better. “Dems Seek to Interview Aide” is a headline you’re unlikely to run across in your Facebook feed, and good riddance to it and its ilk.

Today's social media headlines may appear gimmicky, and no doubt some still are. But when a gimmick endures after the novelty wears off—when it proves resilient to the backlash and to changing tastes and algorithms and market conditions—eventually it’s no longer a gimmick. At this point, motifs such as “will make you” and “10 reasons why” are simply embedded in the social media firmament.

June 27 2017 5:20 PM

Why the EU Just Slapped Google With a $2.7 Billion Fine

Google’s massive bank account may just have gotten a tiny bit smaller. On Tuesday, European Union regulators slapped the Silicon Valley behemoth with a $2.7 billion antitrust fine.

At its core, the penalty comes down to the way Google presents shopping results. Explaining the fine, EU antitrust chief Margrethe Vestager claimed that the company was unfairly promoting its own shopping comparison services higher than those of its rivals. “That is an abuse. It is illegal under European antitrust laws,” she said. Bloomberg writes that the European Competition Commission expects Google to “give equal treatment to rival price-comparison services. … It’s up to Google to choose how it does this and inform the EU of its plans within 60 days.”


This isn’t the first time that Google has come under fire for supposedly anticompetitive activities in the EU. In 2014, for example, the European Parliament considered a nonbinding resolution that would have called on search engine providers to uncouple that service from more commercial practices. More recently, Vestager charged that the company was suppressing competition on Android devices by pushing its own apps and services onto consumers. The Federal Trade Commission has conducted similar investigations of Google in the United States over the last decade, though the two ultimately learned to get along.

In an official blog post, Google general counsel Kent Walker responded to the fine by arguing that Google’s shopping comparison results provide a service to consumers. “While some comparison shopping sites naturally want Google to show them more prominently, our data shows that people usually prefer links that take them directly to the products they want, not to websites where they have to repeat their searches,” Walker wrote. By way of example, his post provides an image of comparison results for “puma shoes” that includes links to five different European retailers.

Conducting a similar search in the United States yields similar results at the top of the page, but the real story may be in the information box that accompanies the “sponsored” acknowledgment in the upper left-hand corner of the results. Pulling up that notification informs users that “Google may be compensated by some of these providers.” In other words, while those results may be convenient, Google is all but showing us advertisements when it pulls them up. If the company’s “data” does, indeed, reveal that this is what users want, it’s using that data to take advantage of our own inclinations.

Moreover, there’s no guarantee we’re seeing the best possible deals, despite the semblance of definite authority. And though other companies provide similar comparative shopping results, Google’s setup makes it less likely that consumers will find their way to those alternative sites. It’s this risk—the very real possibility that Google is suppressing the results of its competitors while promoting its own interests—that worries Vestager and other EU regulators.

The setup of these search results also gets at a larger problem with Google’s increasingly complicated search results, though it diverges somewhat from the EU’s immediate concerns. By design, the company’s infoboxes appear to cut through clutter, seemingly giving us a just-the-facts approach to whatever we’re looking for. Structurally, they’re given a position of authority on the page, literally framing information that pulls the eye. The trouble is that the company’s rationale for presenting that information can be opaque—and the information itself is sometimes even wrong.

These are problems that will likely persist, even if Google finds a way to respond to Vestager’s concerns—as she has demanded that the company do within the next 90 days. Meanwhile, it’s possible that other, similar government actions may be coming in the years ahead. Amazon, for example, recently received a patent to minimize comparison shopping in its brick-and-mortar stores. Vigorous antitrust efforts may provide an important check on such corporate projects, but it’s still worth remembering that information technologies are never far from technologies of control.

June 26 2017 1:22 PM

How Plausible Is Elon Musk’s Plan to Colonize Mars?

This post originally appeared on The Conversation.

The Conversation

Elon Musk, the founder of SpaceX and Tesla, has released new details of his vision to colonize parts of the solar system, including Mars, Jupiter’s moon Europa, and Saturn’s moon Enceladus. His gung-ho plans—designed to make humans a multiplanetary species in case civilization collapses—include launching flights to Mars as early as 2023.


The details, just published in the journal New Space, are certainly ambitious. But are they realistic? As someone who works on solar system exploration, and the European Space Agency’s new Mars rover in particular, I find them incredible in several ways.

First of all, let’s not dismiss Musk as a Silicon Valley daydreamer. He has had tremendous success with rocket launches to space already. His paper proposes several interesting ways of trying to get to Mars and beyond—and he aims to build a “self-sustaining city” on the red planet.

The idea depends on getting cheaper access to space—the paper says the cost of trips to Mars must be lowered by “five million percent.” An important part of this will be reusable space technology. This is an excellent idea that Musk is already putting into practice with impressive landings of rocket stages back on Earth—undoubtedly a huge technological step.

Making fuel on Mars and stations beyond it is something he also proposes, to make the costs feasible. Experiments toward this are underway, demonstrating that choosing the right propellant is key. The MOXIE experiment on the NASA 2020 rover will investigate whether we can produce oxygen from atmospheric CO2 on Mars. This may be possible. But Musk would like to make methane as well—it would be cheaper and more reusable. This is a tricky reaction which requires a lot of energy.

Yet, so far, it’s all fairly doable. But the plans then get more and more incredible. Musk wants to launch enormous spaceships into orbit around Earth where they will be refuelled several times using boosters launched from the ground while waiting to head to Mars. Each will be designed to take 100 people and Musk wants to launch 1,000 such ships in the space of 40 to 100 years, enabling 1 million people to leave Earth.

There would also be interplanetary fuel-filling stations on bodies such as Enceladus, Europa, and even Saturn’s moon Titan, where there may have been, or may still be, life. Fuel would be produced and stored on these moons. The aim of these would be to enable us to travel deeper into space to places such as the Kuiper belt and the Oort cloud.

The “Red Dragon” capsule is proposed as a potential lander on such missions using propulsion in combination with other technology rather than parachutes as most Mars missions do. Musk plans to test such a landing on Mars in 2020 with an unmanned mission. But it’s unclear whether it’s doable, and the fuel requirements are huge.

Pie in the sky?

There are three hugely important things that Musk misses or dismisses in the paper. Missions such as the ExoMars 2020 rover—and plans to return samples to Earth—will search for signs of life on Mars. And we must await the results before potentially contaminating Mars with humans and their waste. Planetary bodies are covered by “planetary protection” rules to avoid contamination, and it’s important for science that all future missions follow them.

Another problem is that Musk dismisses one of the main technical challenges of being on the Martian surface: the temperature. In just two sentences he concludes:

It is a little cold, but we can warm it up. It has a very helpful atmosphere, which, being primarily CO2 with some nitrogen and argon and a few other trace elements, means that we can grow plants on Mars just by compressing the atmosphere.

In reality, the temperature on Mars drops from about zero degrees Celsius during the day to nearly -120 degrees Celsius at night. Operating in such low temperatures is already extremely difficult for small landers and rovers. In fact, it is an issue that has been solved with heaters in the design for the 300 kg ExoMars 2020 rover—but the amount of power required would likely be a show-stopper for a “self-sustaining city.”

Musk doesn't give any details of how to warm the planet up or compress the atmosphere, each of which is an enormous engineering challenge. Science fiction writers have previously suggested “terraforming,” possibly by melting the planet's ice caps. That would not only change the environment forever but would also be difficult, because Mars has no magnetic field to help retain the new atmosphere such manipulation would create. Mars has been gradually losing its atmosphere for 3.8 billion years, which means it would be hard to keep any warmed-up atmosphere from escaping into space.

The final major problem is that there is no mention of radiation beyond Earth's magnetic cocoon. Both the journey to Mars and life on its surface would be vulnerable to potentially fatal cosmic rays from our galaxy and from solar flares, and forecasting of solar flares is in its infancy. With current shielding technology, just a round-trip manned mission to Mars would expose astronauts to up to four times their advised career radiation limits. Radiation can also harm unmanned spacecraft. Work is underway on predicting space weather and developing better shielding, which would mitigate some of these problems—but we are not there yet.

For missions farther afield, there are also questions about temperature and radiation in using Europa and Enceladus as filling stations—with no proper engineering studies assessing them. These moons are bathed in the strongest radiation belts in the solar system. What’s more, I’d question whether it is helpful to see these exciting scientific targets, arguably even more likely than Mars to host current life, as “propellant depots.”

The plans for going farther, to the Kuiper belt and the Oort cloud with humans, are firmly in the science fiction arena: It is simply too far, and we have no infrastructure. In fact, if Musk really wants to create a new home for humans, the moon may be his best bet. It's closer, after all, which would make it much cheaper.

That said, aiming high usually means we will achieve something—and Musk’s latest plans may help pave the way for later exploration.

June 23 2017 12:19 PM

Netizen Report: Arrest and Web Censorship Spark Online Protests in Palestine

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Mahsa Alimardani, Renata Avila, Ellery Roberts Biddle, Marwa Fatafta, Leila Nachawati, Dalia Othman, Elizabeth Rivera, and Sarah Myers West contributed to this report.

Censorship has been on the rise in Palestine in recent weeks. On June 12, officials from the Palestinian Authority demanded that internet service providers in the West Bank block a reported 22 websites, most of which are affiliated with the opposition Islamist party Hamas or are otherwise critical of President Mahmoud Abbas. The websites appear to be blocked only in the West Bank.


An anonymous official from the attorney general’s office said the sites were blocked for violating “rules of publication” but did not offer further specification. The 1995 Press and Publication Law includes several vague restrictions on freedom of expression, including a rule that forbids the press from “contradict[ing] the principles of … national responsibility” or publishing material that is “inconsistent with morals.”

The Haifa-based Arab Center for Social Media Advancement, also known as 7amleh, denounced the order, saying, “[We] find that this move fully contradicts all international treaties and conventions, and marks a significant violation of the digital rights of segments of Palestinian society.”

Online, Palestinians have expressed frustration over the blocking and lack of transparency around the PA’s order. They have launched a campaign under an Arabic hashtag that translates to “no to blocking” and are demanding that the attorney general explain the decision in a public statement.

This spate of online censorship comes on the heels of the June 8 arrest of Nassar Jaradat, a young Palestinian Facebook user. The PA charged Jaradat with “insulting and defaming public officials” in a Facebook post critical of Jibril Al Rajoub, a prominent figure among PA leadership. In a recent interview with the Israeli news program Meet the Press, Al Rajoub said that the Western Wall in occupied East Jerusalem should “remain under Israeli sovereignty,” a statement denounced by many Palestinians.

In his Facebook post, Jaradat said of Al Rajoub’s statement: “To give what you don’t personally own to those who do not deserve it. This is the essence of deception and the terror of concession.”

Jaradat could risk anything from three weeks to two years in jail, in accordance with a provision on “defamation, insult and abasement” in the Jordanian Penal Code of 1960, which is still applicable in the West Bank.

Activists expose Mexico’s multimillion-dollar surveillance tech market
Mexican human rights lawyers, journalists, and anti-corruption activists were targeted by spyware acquired by the government, according to research published this week by a group of nongovernmental organizations from Mexico and Canada. The spyware was purchased by Mexican authorities from the Israeli company NSO Group, under an explicit agreement that it be used only to investigate criminals and terrorists. Among those targeted were prominent journalists, lawyers investigating the mass disappearance of 43 students, and an American lawyer representing victims of sexual abuse by the police.

The government has denied engaging in surveillance and communications operations against human rights defenders without prior judicial authorization. However, research by the University of Toronto’s Citizen Lab suggests that the choice of targets and the style of targeting “provide strong evidence the targeting was conducted without proper oversight and judicial accountability.”

Twitter censors Venezuelan government supporters
Venezuela’s information minister reported last week that at least 180 Twitter accounts belonging to government supporters and government-sponsored media channels have been suspended from the U.S.-based platform. On June 17, President Nicolas Maduro made a public statement condemning the suspensions as an “expression of fascism” and vowing to open thousands of new accounts. “The battle on social media is very important,” he said. Although Twitter’s guidelines prohibit violent threats, harassment, and “hateful conduct,” the company’s implementation of these rules is known to be uneven and unpredictable.

Spy tech threatens Chinese jaywalkers
Chinese cities including Jiangbei, Jinan, and Suqian have implemented facial recognition software to shame and fine citizens for jaywalking. Once captured, offenders' images appear on big screens at intersections, and their information—including a headshot, name, age, home address, registration, and ID number—is uploaded to a police system.

Japan’s anti-conspiracy bill puts citizens under microscope
On June 15, Japan’s parliament passed a controversial “anti-conspiracy” bill into law. There are fears that the vague nature of the new law, which covers nearly 300 crimes, will erode civil liberties in Japan by providing authorities with broad surveillance powers, leaving the question of who can be monitored open to interpretation. Joseph Cannataci, U.N. special rapporteur on the right to privacy, has criticized the bill and expressed concern that it may “legitimize and facilitate government surveillance of NGOs perceived to be acting against government interest.”

New Research

“#EgyptCensors: Evidence of Recent Censorship Events in Egypt”—Open Observatory of Network Interference

June 22 2017 5:52 PM

Why the Los Angeles Times Accidentally Tweeted About an Earthquake That Happened 90 Years Ago

On Monday, a bot led to incorrect information going out on Twitter. That’s hardly unprecedented. But this case was different: It was the Los Angeles Times telling the public there had been a 6.8 magnitude earthquake in Isla Vista, California, even though no such quake had taken place.

The tweet was prompted by an alert from the U.S. Geological Survey that said the earthquake happened on June 29, 2025. But it was actually referring to a real earthquake that happened almost a century ago.


So how did this happen?

A system at USGS records ground motion, compares it with other motion in the area to determine where it’s coming from, and then declares whether an earthquake is occurring. This week, researchers tried to update the location of an earthquake from 1925. Thanks to a bug in the software that sends out email updates, subscribers received a notification telling them an earthquake had occurred in 2025.

The Los Angeles Times Quakebot received this information and did what it’s designed to do: It wrote up basic information about the quake (location and magnitude) and then tweeted it out. As Will Oremus explained in Slate in 2014, the bot is in place not to replace real, live journalists, but to make it easier to release quick information about emergency situations.
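A pipeline like the one described above can be sketched in a few lines. This is a minimal, hypothetical illustration: The function names, alert format, and freshness threshold are my own, not the Los Angeles Times' actual code. It also shows the kind of sanity check that would have caught the misdated alert, since the buggy email stamped a 1925 quake as happening in 2025.

```python
from datetime import datetime, timezone

def format_quake_tweet(alert):
    """Write up the basic facts of a USGS-style alert as a short tweet."""
    return (f"A magnitude {alert['magnitude']} earthquake occurred near "
            f"{alert['place']}.")

def is_fresh(alert, now, max_age_minutes=30):
    """Reject stale or future-dated alerts before tweeting.

    A future-dated alert (negative age) or one older than the threshold
    is treated as suspect and skipped.
    """
    age_minutes = (now - alert["time"]).total_seconds() / 60
    return 0 <= age_minutes <= max_age_minutes

# The restamped alert: a real 1925 quake dated as June 29, 2025
alert = {
    "magnitude": 6.8,
    "place": "Isla Vista, California",
    "time": datetime(2025, 6, 29, tzinfo=timezone.utc),
}
now = datetime(2017, 6, 21, tzinfo=timezone.utc)

print(format_quake_tweet(alert))
print(is_fresh(alert, now))  # False: the alert is dated in the future
```

The point of the sketch is that the bot itself is trivially simple; as Jones notes below, its output is only as accurate as the feed it consumes, so the defensive check belongs at the boundary between the two systems.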

However, this small mistake led some to think an actual, fairly large earthquake had taken place. Had that been true, it would have changed the day considerably—especially for a local reporter.

Soon enough people were taking to Twitter to ask what was going on. The Los Angeles Times quickly deleted the tweet and published an article explaining what had occurred.

So does this flub suggest that we should stop depending on bots to alert people to earthquakes? Lucy Jones, a seismologist and USGS scientist emerita, doesn’t think so.

Previously, updates would have to be run by a person, and Jones said that takes too much time in the case of a real emergency.

“The only way to not have misinformation is to not have information. [The bot] is completely accurate if the USGS email’s accurate,” she said. “This is the first mistake of this type that we’ve seen … and now it’s been fixed.” She thinks the bot is still worthwhile, particularly because when we expect flawless information, we stop thinking about where that data came from.

“I’m a little concerned about how people think information just sort of magically appears,” Jones said. “We get so accustomed to the weather showing up on our watch now, you don’t think about the billions of dollars of satellites and people and computers that go in to do weather modeling.”

Thankfully, this mistake doesn’t seem like a career-ending move for the Los Angeles Times bot. Hopefully other early detection systems can also stay in place, like the tsunami monitoring stations in the Pacific Ocean, which this administration seeks to cut U.S. funding for.

June 22 2017 12:21 PM

Study: Hispanic Americans Use the Internet Less Than Any Other Ethnic Group

Hispanics use the internet the least of any ethnic group, according to research from eMarketer.

The study found that 79.8 percent of Hispanics use the internet at least monthly from any device—cellphones, tablets, desktops, etc. That’s compared with 84.3 percent of whites, 83.6 percent of Asians, and 82.5 percent of blacks. The report also predicts the gap will continue to shrink, but Hispanics still won’t reach the same usage levels any time soon: By 2021, it anticipates that 82.6 percent of Hispanics and 86.2 percent of whites will use the internet monthly.


These numbers are pretty similar to a 2016 report from Pew, which put the rate of Latinos using the internet at 84 percent. But in that survey, black Americans were the group that used it the least, at 81 percent. (The Pew survey referred to its participants as Latinos, while the eMarketer survey used the term Hispanics. While many Latinos are Spanish-speaking, and the populations are similar, these terms are not interchangeable.)

According to Pew, a large part of the difference in usage comes from disparities in education and English proficiency levels.

As with any other segment of the population, there is a generational divide in how Hispanics use and think about the internet. A separate survey conducted by Simmons Research showed varying feelings among Hispanics of different age groups about watching television vs. going online. Among young Hispanics ages 18–34, 43.5 percent said they watch less TV on television sets because of the internet, compared with 29.2 percent of the 35–49 group and 16.7 percent of those over 50. This probably has something to do with the fact that far fewer Latinos ages 50 to 64 use the internet (just 67 percent), while their younger counterparts use it at a rate of 90 percent, according to another section of the same 2016 Pew Research Center study.

However, Latinos have had high rates of usage when it comes to other technologies. According to the same Pew 2016 report, Latinos are very likely to “own a smartphone, to live in a household without a landline phone where only a cellphone is available and to access the internet from a mobile device.”

While the percentage of Hispanics who use the internet has continued to rise steadily, adoption rates have risen more slowly for whites. Between 2009 and 2015, the rate among Latinos rose about 20 percentage points, while for whites it rose only about 8 percentage points.

This digital divide is important because differences between internet usage can very easily translate to disparities in everyday life. The Joan Ganz Cooney Center, which has a series about how low-income families access Federal Communications Commission programs, says, “internet service and digital technologies are critical for accessing a broad range of resources and opportunities.”

In a 2015 report titled “Aprendiendo en casa,” the center examined media as a resource for learning among Hispanic-Latino families. It found that parents believe children develop academic skills from using educational media, but they still want to know more about the media their kids can use.

The report profiled a young girl named Alicia, a 9-year-old of Ecuadorian descent whose name has been changed for privacy reasons, who watches YouTube videos both to help her with her math homework and to learn how to make dresses and accessories for her dolls. Her mother plays an active role in both these activities. The lesson here? An increase in technological resources can also help bridge the gap between generations.