Future Tense
The Citizen's Guide to the Future

May 11 2017 6:50 PM

Everything You Need to Know About the “Digital” “Catapults” Donald Trump Thinks the Navy Doesn't Need

In an interview with Time excerpted online Thursday morning, President Donald Trump covered an array of topics. None of his statements has proved more baffling, however, than his claims about catapults aboard the Navy’s new Ford-class aircraft carriers, the first of which should be in service this summer. Yes, catapults. His remarks on the topic are worth quoting in full:

You know the catapult is quite important. So I said what is this? Sir, this is our digital catapult system. He said well, we’re going to this because we wanted to keep up with modern [technology]. I said you don’t use steam anymore for catapult? No sir. I said, “Ah, how is it working?” “Sir, not good. Not good. Doesn’t have the power. You know the steam is just brutal. You see that sucker going and steam’s going all over the place, there’s planes thrown in the air.”
It sounded bad to me. Digital. They have digital. What is digital? And it’s very complicated, you have to be Albert Einstein to figure it out. And I said–and now they want to buy more aircraft carriers. I said what system are you going to be–“Sir, we’re staying with digital.” I said no you’re not. You going to goddamned steam, the digital costs hundreds of millions of dollars more money and it’s no good.

Why, you might reasonably ask, does an aircraft carrier need catapults? And what’s the difference between a steam catapult and a “digital” one? Glad you asked!

You first have to understand that Trump wasn’t talking about medieval siege engines, though you’d barely know that from reading his comments. Instead, the key is in his offhand claim that his interlocutor said something about “planes thrown in the air.” Despite their size, aircraft carriers have relatively short runways. Accordingly, they employ catapult systems to assist with takeoff.

For decades, the steam-powered launch systems that Trump alludes to have been the norm. Despite their antiquated-sounding name, these catapults are complicated mechanisms, not some sort of Victorian holdout. Illumin offers a helpful gloss of the way it all works, but it goes something like this: When they’re preparing for launch, planes are strapped into the catapult, which holds them in place even as their pilots throttle up their engines. When the steam—drawn, as Defense Industry Daily explains, from a carrier’s nuclear reactor—accumulates, it drives pistons in the catapult, releasing the restraints and sending the aircraft hurtling forward and into the sky. Fwoosh!

It’s an effective system, which is why it’s been employed for so long, but it’s not without its problems. As Defense Industry Daily notes, “The result is a large, heavy, maintenance-intensive system that operates without feedback control; and its sudden shocks shorten airframe lifespans for carrier-based aircraft.” In addition to increasing wear and tear, steam-based catapults may not be ideal for future generations of aircraft, some of which may be too heavy for the system to support.

Enter the Electromagnetic Aircraft Launch System, generally known as EMALS. (Yes, really.) This new system—which is presumably what Trump had in mind when he spoke of “the digital”—really is complicated, but the basic function is simple enough. In fact, it’s reportedly similar to the technology used on roller coasters: EMALS activates a series of linear motors that pull the aircraft along a track, accelerating it to its appropriate launch speed. The process puts considerably less stress on aircraft, meaning that they can remain operational for far longer. It’s also faster and allows for more precise calibration, letting carriers quickly launch a variety of aircraft.
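To get a feel for the forces either kind of catapult has to deliver, here’s a back-of-the-envelope kinematics sketch in Python. The track length and takeoff speed below are illustrative assumptions—roughly the right order of magnitude for a carrier catapult—not official Navy figures.

```python
# Rough sketch: what constant acceleration is needed to reach takeoff
# speed over the length of a catapult track?
# Both figures below are illustrative assumptions, not official specs.

TRACK_LENGTH_M = 91.0      # assumed usable catapult stroke, ~300 feet
TAKEOFF_SPEED_MS = 70.0    # assumed launch speed, ~136 knots

# Constant-acceleration model: v^2 = 2 * a * d  =>  a = v^2 / (2 * d)
accel = TAKEOFF_SPEED_MS ** 2 / (2 * TRACK_LENGTH_M)
g_load = accel / 9.81      # the same acceleration expressed in g

print(f"required acceleration: {accel:.1f} m/s^2 (~{g_load:.1f} g)")
```

Even this toy model hints at why feedback control matters: a steam piston delivers that multi-g load as one sudden shock, while electromagnetically driven motors can shape the force profile along the track.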

I called up defense expert Peter W. Singer of New America (which is a partner with Slate and Arizona State University on Future Tense), who affirmed the potential benefits of EMALS. “They offer you improved efficiency and are less maintenance intensive. It allows the aircraft carrier to operate more effectively, because the turn-around time is better,” he said. He also pointed out that China will likely incorporate EMALS-type technology into the generation of aircraft carriers after the one it’s currently developing.

“Trump doesn’t know what he’s talking about. That technology is a key selling point of the new aircraft carriers,” Singer said.

One of the most puzzling elements of Trump’s statement is his description of the catapults as “digital” rather than electromagnetic. It seems entirely possible, of course, that Trump—who, as we have long known, understands terrifyingly little about computers—simply flipped the two words. Indeed, his assertion that “you have to be Albert Einstein to figure it out” squares well with his previous claims about hacking and other problems with what he has called “the cyber.”

That said, it’s worth noting that the underlying digital complexity of EMALS is one of its selling points. According to a post on the site Global Security, “Another advantage of EMALS is that it would reduce manning requirements by inspecting and troubleshooting itself.” While it would demand that crews be taught new skills as they learn to interact with the new system, it presumably wouldn’t require whole teams of Einsteins, thanks in part to its self-diagnostic capabilities.

Why, then, would Trump want to abandon this promising new technology just before it goes into operation? If we take him at his word, it’s presumably because the system is pricier than the older alternative. There may be some truth to this claim: The Ford-class carrier program has been dogged by delays and cost overruns. But as the Atlantic notes, “[T]he problems with the Ford-class carrier program are more organizational than technological.” What’s more, Navy Times cites an estimate indicating that EMALS will “save the Navy $4 billion in maintenance costs over the course of the ship’s 50-year lifetime.”

Further, Foreign Policy’s Robbie Gramer points out that this proverbial ship has already sailed, even if the USS Ford hasn’t. “Experts say it’s virtually impossible to sort out how to replace the existing EMALS system with the old steam-powered system, and that could cost billions of dollars,” Gramer writes. In other words, Trump’s attempt to save money could cost the military dearly, even as it restricts the military’s capacity for technological development in other ways.

Singer echoed many of these concerns in our conversation. “For people who know this topic of defense acquisitions one of the reasons you get escalating costs is when you change the requirements and designs midstream, and that’s exactly what [Trump’s] proposing to do here,” he told me. Moreover, he noted, “This is inappropriate, and an amazing level of micromanagement that would have Republican defense wonks apoplectic if Obama had done it.”

They don’t seem to be objecting yet.

May 11 2017 3:44 PM

Self-Driving-Car Experts Are Reportedly Trying to Flee Uber Before Google Kneecaps It in Court

Uber’s ongoing PR trainwreck got a little uglier on Wednesday. Software engineers employed by the tech juggernaut are reportedly hunting for jobs outside the company, spooked by rock-bottom employee morale and an ongoing litigation fight with Alphabet, Google’s parent company.

The skittish employees are mostly engineers affiliated with Uber’s self-driving technology division, multiple sources told Recode. They feel threatened by a federal trade-secret and patent-infringement lawsuit that Waymo, the in-house autonomous car unit Alphabet created last December, launched in February against Otto, a self-driving trucking company Uber acquired last year in a deal valued at $680 million. Waymo’s suit alleges that Otto founder Anthony Levandowski stole proprietary self-driving technology while working at Google X, Alphabet’s corporate research lab for so-called “moonshot” projects. According to sworn testimony filed by an Alphabet forensic engineer, Levandowski downloaded 14,000 documents related to lidar, a laser-based, 360-degree sensing system Waymo invested in last January to help its self-driving vehicles navigate unpredictable roads. Levandowski left his position heading up Google’s lidar team to start Otto shortly after. Things took a turn for the sinister after Uber bought out Otto just six months after its founding. Alphabet now alleges that Uber knew about the alleged theft (Levandowski was also a consultant for the ride-hailing company when he left Google) and has filed for an injunction to prevent the ride-sharing company from using any of Waymo’s “trade secrets” in its own self-driving vehicles.

The suit is far from trivial. As Wired reported in February, “Mastering lidar is essential to the technological and commercial success of robo-cars.” Full ownership over the technology is critical to Alphabet’s dreams of selling lidar directly to automakers. And despite technical setbacks, Uber partnered with Volvo as part of its autonomous driving tests in Pittsburgh and Arizona last year and has pledged to deliver a road-ready fully autonomous car by 2021. In January, it made a similar deal with Daimler, the German parent company of Mercedes-Benz.

The Alphabet-Uber showdown may now hinge on what Uber knew and when it knew it. As Recode reported late last month, Judge William H. Alsup of the U.S. District Court for the Northern District of California ruled that Levandowski’s Fifth Amendment right to avoid self-incrimination did not protect him from having to turn over a due diligence document Uber prepared in the course of acquiring Otto. The document could prove Levandowski delivered proprietary lidar technology to Uber with the ride-hailing company’s knowledge. Uber, for its part, has maintained its innocence. It claims that its self-driving vehicles employ lidar technology purchased from Velodyne—a supplier Waymo used to work with before bringing its lidar work in-house—and that autonomous-navigation technology currently under development by Uber engineers predated Levandowski joining the company.

And what about those engineers, anyway? If Judge Alsup rules in Alphabet’s favor, Uber employees worry they could be left in the lurch as the company’s chances of developing autonomous vehicles evaporate. Alsup’s decision on the injunction is expected sometime later this week. According to Recode, the injunction could stop Uber’s development of autonomous vehicles in its tracks. At the very least, it could prevent Uber from using any Waymo technology Levandowski’s alleged skullduggery yielded. Even that minimum ruling could spell disaster for Uber. Although the company remains top dog in the ride-hailing world (in terms of valuation it outclasses its closest competitor, Lyft, by nearly a factor of 10), autonomous vehicles are poised to flip the board. “The world is going to go self-driving and autonomous,” Uber CEO Travis Kalanick told Business Insider last August. “So if that’s happening, what would happen if we weren’t a part of that future? If we weren’t part of the autonomy thing? Then the future passes us by basically, in a very expeditious and efficient way.”

So far, Uber’s management hasn’t done much to allay its engineers’ fears. Under intense legal scrutiny, Levandowski distanced himself from the company “through the remainder of the Waymo litigation” in late April. His deputy, Eric Meyhofer, reportedly continues to consult closely with him. Faced with a legal question mark, some Uber engineers are apparently scrambling for an escape hatch. At this point, potential defectors appear limited to the company’s self-driving engineering division, rather than its mammoth engineering department writ large (which employed 1,200 engineers as of January 2016). But even if Uber survives the PR fallout and consumer revolt of losing a high-profile courtroom scuffle with Alphabet, a massive exodus of technical talent could sink the company’s chances of staying competitive.

Even so, nervous employees and corporate litigation are only the latest entries in a bevy of PR nightmares that have beset the popular ride-hailing service of late. As I wrote last week, the Department of Justice has apparently launched a criminal investigation into the San Francisco-based company’s use of software to evade authorities in cities in which it hadn’t been given permission to operate. Uber, currently valued at $70 billion, has also been bleeding top talent: Its president and top communications executive resigned within a month of one another. Those conditions have reportedly poisoned employee morale, compounding self-driving engineers’ wariness about their futures with the company.

What exactly do Uber’s self-driving division engineers do and how might their departure affect the company’s services? I reached out to Uber’s press division by email to ask, but never received a response (Uber declined to comment for Recode’s story as well). But while there’s little information about what life is like for Uber’s self-driving engineers, we do know something about the prevailing culture within the company’s engineering division more generally. Those insights come from a blog post written in February by former Uber engineer Susan J. Fowler, which also detailed the persistent sexual harassment she experienced at the hands of co-workers. Its harrowing, infuriating, yet pretty much industry-standard account of her treatment by the company aside, the post offers a good primary-source look inside Uber’s engineering ecosystem. It isn’t a pretty picture. Fowler joined Uber in November 2015 as a site reliability engineer, a software engineer who designs operations functions. Her post documents fairly routine internecine power struggles within the company’s engineering division, which she characterizes as a shameless “game-of-thrones political war.” As she describes it:

It seemed like every manager was fighting their peers and attempting to undermine their direct supervisor so that they could have their direct supervisor's job. No attempts were made by these managers to hide what they were doing: they boasted about it in meetings, told their direct reports about it, and the like.

Although Fowler lauds Uber employees as “some of the most amazing engineers in the Bay Area,” many couldn’t stick it out in such an environment. “Things didn't get better, and engineers began transferring to the less chaotic engineering organizations” within the company, she writes. “Once I had finished up my projects and saw that things weren't going to change, I also requested a transfer.”

Now, embroiled in a lawsuit whose outcome looks increasingly grave and weighed down by a morose company culture, some restless Uber engineers apparently want out altogether. And who can blame them?

May 9 2017 3:25 PM

The FBI Relied on a Private Firm’s Investigation of the DNC Hack—Which Makes the Agency Harder to Trust  

“When will the Fake Media ask about the Dems dealings with Russia & why the DNC wouldn't allow the FBI to check their server or investigate?” President Trump tweeted on Sunday at 4:15 a.m. How invigorating to discover that, like me, the president also lies awake at night wondering about the mechanics of major data-breach investigations!

Setting aside the nonsensical first half of the tweet, there’s actually an interesting question worth revisiting buried in the second half. Why wouldn’t the Democratic National Committee allow the FBI to check its servers during the investigation of the breaches it suffered in the course of the 2016 election?

The DNC maintains there’s a simple answer to this question: According to the group, the FBI never asked to see their servers. But FBI Director James Comey told the Senate Intelligence Committee back in January that the FBI did, in fact, issue “multiple requests at different levels” to the DNC to gain direct access to their computer systems and conduct their own forensic analysis.

Instead, whether because they were denied access or simply never asked for it, the FBI used the analysis of the DNC breach conducted by security firm CrowdStrike as the basis for its investigation. Regardless of who is telling the truth about what really happened, perhaps the most astonishing thing about this probe is that a private firm’s investigation and attribution were deemed sufficient by both the DNC and the FBI.

That’s not meant as an insult to CrowdStrike, which is, undoubtedly, a first-rate security firm that does extremely sophisticated and reliable investigative work. Calling in CrowdStrike was a good move on the part of the DNC. I’ve even argued that the DNC should have been relying more heavily on private tech firms to provide its email services and security from the outset. But it’s one thing to trust tech companies to provide email servers and cloud storage and quite another to rely exclusively on them to collect and analyze evidence of a major security incident attributed to a foreign national government.

Good security companies can be invaluable when it comes to helping breach victims figure out where they went wrong and how they can better protect their systems in the future. They can certainly, at times, provide useful assistance to law enforcement investigations—but when they end up essentially doing law enforcement’s job for them, as seems to have been the case with the DNC breach, it becomes exceedingly difficult to know whom to trust and whether to take the results of that investigation at face value. In fact, the president made this point himself, in a Jan. 5 tweet about the FBI investigation, back when he apparently believed the DNC’s version of events: “So how and why are they so sure about hacking if they never even requested an examination of the computer servers? What is going on?”

Knowing who conducted a breach investigation is particularly important when it comes to international cyber conflicts because just about everything the government tells us about those conflicts we are expected to take on faith. Consider the declassified summary of the Intelligence Community’s assessment of “Russian Activities and Intentions in Recent US Elections.”

The DNC breaches feature prominently in that summary but, more to the point, the primary rationale readers are given for why they should believe that the Russian government meddled in the U.S. election is because the FBI, CIA, and NSA believe that to be the case. We are given very little actual detail about what happened or how the incidents were traced to Russia specifically, while we are treated to numerous statements along the lines of: “We assess with high confidence that Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election” or “We further assess Putin and the Russian Government developed a clear preference for President-elect Trump. We have high confidence in these judgments.”

Of course, there are many reasons the Intelligence Community might have decided not to reveal any actual evidence for these claims. But in the absence of that evidence, whether or not you believe their conclusions rests entirely on your confidence in the judgment and investigative abilities of the FBI, CIA, and NSA. And if the evidence that they’ve used to level major accusations at a foreign government comes not from agencies of the U.S. government or direct law enforcement investigations, but rather from private sector firms like CrowdStrike, then the “high confidence” of the government counts for very little. The DNC breach is not the only incident attributed to Russia in the Intelligence assessment summary and it’s likely that some of the others were directly investigated by the government. But even so, this conflation of government- and industry-gathered evidence without clear distinctions makes it harder to take the agencies’ assessments at face value.

Asking private firms to investigate security incidents is often beneficial—it’s possible (likely even) that CrowdStrike has resources and technical expertise that the FBI does not. But turning over an entire law enforcement investigation to the private sector is a serious mistake. Companies have very different agendas and motivations from those of law enforcement agencies—companies want to raise their own profiles, satisfy their clients, and draw new customers, while law enforcement agencies aim to identify criminals and hold them accountable. Especially when the government is going to justify an accusation by urging citizens to trust its judgment, it matters that they have actually conducted an investigation themselves and drawn their own conclusions based on a first-hand examination of the available evidence.

So if the FBI didn’t ask for access to the DNC’s servers out of laziness or negligence, it certainly should have. And if the DNC denied the bureau that access for fear of being embarrassed by what it might find, or because it had more faith in CrowdStrike than in the FBI, then it served only to undermine confidence in the ultimate results of the investigation and give the impression of having something shameful to hide. Neither the DNC nor the FBI should have been satisfied with an investigation that did not involve the FBI taking a first-hand look at the compromised systems. And all of us should be concerned by both parties’ seeming willingness to let a private company singlehandedly carry out an investigation with such significant political consequences.

May 9 2017 2:16 PM

An A.I. Dreamed Up a Bunch of Dungeons & Dragons Spells. They’re Surprisingly Perfect.

Imagine this: You are a skilled sorcerer, part of a mighty group of adventurers exploring an ancient catacomb. You and your allies have hacked and slashed your way through hordes of skeletons and zombies, but now you face the maze’s true master, the deadly lich Azalin. The warriors around you have fallen to his arcane trickery, leaving you alone to fell this dread lord. Wracking your brain, you struggle to remember a spell that might turn the tide of battle. Your powers spent, you realize to your horror that only one remains in your arsenal: Mous of Farts.

This odiferous enchantment isn’t one that you’ll find in any Dungeons & Dragons rulebook. Instead, it comes courtesy of an unlikely source: a neural network—an algorithm loosely modeled on the human brain—trained to invent the names of new spells for the venerable roleplaying game. It’s the work of a puckish researcher named Janelle Shane, who explains on Tumblr that she primed the network with a dataset of 365 official spells culled from the franchise’s long history.

Though they’re clearly the work of a computer, the appealingly silly results are just close enough to the real thing that you might not question them if one or two showed up in an old issue of Dragon magazine. Shane told me by email that her favorite creation is “Barking Sphere,” but there are plenty of other winners on the full list. Here are a few of the best:

  • Hold Mouse
  • Gland Growth
  • Wrathful Hound
  • Grove of Plants
  • Conjure Velemert
  • Vicious Markers

Glossing the process, Shane told me, “As the algorithm generates text, it predicts the next character based on the previous characters—either the seed text, or the text it has generated already.” She sets it to a “high temperature,” meaning that it attempts to avoid common choices as it selects each subsequent letter. As a result, it tends to make spelling errors and produce nonsense words. When that happens, Shane wrote, “[I]t will often finish the entry with the next-closest phrase from the original dataset.” By way of example, Grove of Plants and Gland Growth might be picking up on the real D&D spell Plant Growth. The result is a list of new spells that approximate familiar ones while leaving room for zany novelty.
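Temperature sampling of this kind is a standard trick in character-level text generation, and it’s simple enough to sketch. The snippet below is an illustrative toy (the function name and example logits are made up for this sketch), not Shane’s actual code: it shows how dividing a model’s raw scores by a temperature before sampling changes how adventurous the picks get.

```python
import numpy as np

def sample_next_char(logits, temperature=1.0, rng=None):
    """Pick the next character index from a model's raw scores (logits).

    Dividing the logits by a temperature > 1 flattens the probability
    distribution, making the sampler more willing to pick unlikely
    characters -- the source of the misspellings and invented words
    described above. A temperature near 0 approaches plain argmax.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return rng.choice(len(probs), p=probs)

# Toy scores for a 3-character alphabet.
logits = [2.0, 1.0, 0.1]
cold = sample_next_char(logits, temperature=0.01)  # effectively argmax
```

At a very low temperature the most likely character wins almost every time; crank the temperature up and the picks spread out across the whole alphabet, which is exactly the “high temperature” behavior Shane describes.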

Shane has used similar systems to autonomously invent previously undiscovered Pokemon (Quincelax, Tortabool), the name of your next metal band (Sköpprag, Dragorhast), and even “adorable” pickup lines (“You look like a thing and I love you”). The idea for the spell list came to her from a blog reader, and though she doesn’t play the game regularly, she knows enough people who do that she “can get the jokes.”

While those jokes are good, the results also line up surprisingly well with the history of Dungeons & Dragons, partly because its spellcasting system was always fundamentally peculiar. The game’s rulebooks have long been riddled with spells similar to those created by Shane’s algorithm, many of them dumb at best—and likely to get your character killed at worst. Fans occasionally list some of the dopiest offenders: io9’s Rob Bricken, for example, has rounded up 20 of the “most useless” ensorcellments, including Snilloc’s Snowball (shoots a single snowball at your enemy), Hold Portal (locks a door), and Guise of the Yak-man (“Self-explanatory,” Bricken writes). While it’s almost possible to imagine some application for these spells, it would take a great deal of luck, and a great deal of creativity, to find the right circumstance for them. You’re typically going to be better off sticking with standbys like Magic Missile or Fireball.

To understand why these spells are part of the game, you have to look to its distant origins, which overlap with the output of Shane’s neural network in striking ways. Some of the goofiest spells have names attached to them—say, Tenser’s Floating Disc—and in many cases those names refer to characters from initial home experiments with the game conducted by its co-creator, Gary Gygax, experiments in which he was often joined by his own children. As Kent David Kelly explains in Hawk & Moor, a meticulously researched history of the game’s early sessions, Otto’s Irresistible Dance—which lands at No. 13 on Bricken’s list—was named for a wizard that they encountered and charmed, a fellow famed for his love of partying. Many other spells have similar origins: They’re silly because they’re the products of a father’s silly interactions with his children.

In other words, Gygax, an improvisatory raconteur, was akin to a parent telling bedtime stories to and with his children; elements of those stories just happened to become canonical details in a game that has since been played by millions. Magpie-minded, he would take bits and pieces of rules from other games and from other players, welding them into constructs that held together, but just barely. As a game designer, then, Gygax himself was a bit like the open-sourced neural network employed by Shane, trying to come up with new ideas from a familiar set of influences. While Shane’s spells may be more consistently comical, they’re not that far off the mark.

Shane, for her own part, is modest when it comes to the value of her project, telling me that it mostly serves to “showcase the adaptability of neural networks.” Nevertheless, there’s something unusually apt about her spell list that gets at—and possibly goes beyond—the limitations of other attempts to engineer “creative” artificial intelligence, many of which tend to leave humans doing most of the work, as I’ve argued before. A Japanese novel “written” by a computer, for example, turned out to have been mostly mapped out by humans—and the A.I.-scripted short film Sunspring would have been gibberish without the emotive performances from its lead actors.

If Shane’s list works better than some other algorithmic inventions, it’s largely because spell names are simpler than novels or scripts, but it also helps that Dungeons & Dragons is necessarily participatory: You can’t, after all, play a roleplaying game without people playing the roles. Simultaneously, Shane’s system captures the recombinatory quality of real creativity, the way we imagine habitable worlds as we contemplate the ruins of dead civilizations. In the process, it arguably suggests a more honest path for artificial intelligence, one that would encourage it to work with us by making it work a little like us.

May 7 2017 10:33 PM

The Macron Hack Was a Warning Shot

Ever since the U.S. election was called for Donald Trump, experts who closely follow the geopolitics of information have worried that France would be next. There’s a preponderance of evidence to suggest that Vladimir Putin’s Russia (which has been tampering with political processes in Europe for years) seriously stepped up its game in 2016, using a range of tactics to undermine the centrist status quo represented by the U.K. remaining in the EU and Hillary Clinton’s candidacy in favor of the right-wing white nationalism embodied by Brexit and Donald Trump. We know how that went. And so, since November, many of us have held our breath at the prospect of Marine Le Pen—the scion of France’s leading far-right family, who has documented financial ties to the Kremlin and whose election would have threatened the post–World War II international system—becoming France’s next president.

We can exhale now that Emmanuel Macron has soundly defeated Le Pen at the polls, but we were right to worry: Late Friday night, less than two days before the vote, Macron’s campaign announced it had been the victim of a massive hacking attack a few weeks before, the fruits of which were posted to Pastebin. The link was then shared on 4chan and by WikiLeaks’ Twitter account, among other places. The announcement came just before the start of a legally mandated blackout that prohibits the media from publishing polls, interviews, or other information about the election or candidates for 40 hours before the vote. In a press release, the Macron team said that the leaked cache contained both hacked documents that revealed nothing more than the ordinary functioning of a political campaign and falsified documents.

As Slate’s French sister site reported, France’s National Presidential Campaign Control Commission asked the media to refrain from reporting on the contents of the alleged leaks until after polls closed on Sunday night, and the French media seem to have complied. International outlets and ordinary internet users weren’t bound by the ban, of course, but so far there doesn’t seem to be anything remotely exciting in the 9GB data dump. Surely whoever posted the documents on Pastebin knew that. So why do it?

The timing was particularly odd. It seems unlikely the culprits wouldn’t know about the 40-hour blackout, especially if it was indeed the Russian government, as many are speculating. Whether or not it was meant to actually impact the election, we should take it as a warning to voters and public officials across the world’s democracies that while attempts to tank Macron’s campaign may not have been successful, the information war is far from over. It’s being waged on more fronts than ever, with increasing professionalization (Russia’s defense ministry has a new department of information warfare) and cross-border coordination. The endgame is to spread networked authoritarianism, a political system seen in places like China and Russia that uses the power of the internet to carefully control the expression of dissent in a way that gives the impression of limited freedom of expression without actually allowing dissent to gain traction or challenge kleptocratic elites’ hold on power. The goal is a “managed democracy” for the digital age.

Putin, Trump, and Le Pen are all cut from the same cloth: They’re authoritarian xenophobes whose greatest fear is a functioning free press to keep them in check. Baiting the media with non-news like the contents of hacked emails (petty office feuds! risotto recipes!) pollutes the news cycle with so much drivel that audiences either lose sight of the serious issues at stake, stop paying attention, or become so disillusioned with politics that they start thinking all the candidates are the same, so voting doesn’t matter. Meanwhile, the far right gets its voters out, takes power, and starts stripping society for parts.

Now that Macron has won, what do we do next? First, do as we French do and have some champagne.

Second, fight like hell for our open societies and for the values and institutions that underpin them, starting with a free, responsible press. Zeynep Tufekci’s exhortation to French journalists this weekend not to fall into the same trap the U.S. media did nailed it: Repeating misinformation, even to debunk it, only amplifies it and helps it gain traction. Unlike insider leaks from whistle-blowers, the real story in these adversarial hacks usually isn’t the content of the stolen documents (Phineas Fisher’s hacks of FinFisher and Hacking Team stand out as exceptions). The fact of the hack is the story—along with who, what, where, when, and why. This doesn’t mean journalists should ignore the content of hacked documents entirely, and the line between leaks and hacks may be blurry at times. No one ever said this would be easy. A good rule of thumb: If you had received the same information in a press release, would it be newsworthy, or would you roll your eyes at a PR department trying to get you to write such an inane story?

Macron’s electoral victory is only a reprieve in the larger struggle over geopolitics, economics, and contested values like human rights, democracy, and pluralism. It was likely aided by several features that differentiate France’s media and political environments from those of the U.S. and the U.K. Notably, the two-round voting system, the existence of multiple political parties, the public funding of campaigns, a shorter campaign season, and the strict regulation of electioneering may have helped make this election more resilient to information warfare. It is worth considering whether some of these features could be emulated in other countries.

Another major difference is that cable news channels and social networks like Twitter and Facebook don’t play nearly as big a role in French politics as they do in the U.S. or even the U.K. By now it’s conventional wisdom that cable news ignores substantive arguments about the issues in favor of obsessing over the campaign circus. Much of what airs on cable news is also so insubstantial, sensationalistic, or blatantly partisan that it pushes would-be voters toward social media as a source of information. That not only contributes to so-called filter bubbles (though how important those really are is disputed) but may also expose those voters to intentional manipulation by sophisticated big-data analytics firms with ties to the military-industrial complex as well as white nationalist billionaires. (Can you even imagine writing such a sentence before 2016?)

In a world where communication transcends borders faster than the speed of thought, figuring out how to balance the needs of democracy with freedom of expression and access to information may well be the challenge of a generation. Let’s breathe a sigh of relief that the world as we know it won’t fall apart this week, top off that champagne, and get to work. Germany votes in September.

May 4 2017 5:31 PM

Transportation Secretary Elaine Chao Doesn’t Seem to Understand Self-Driving Cars

In an interview with Fox Business on Wednesday, Transportation Secretary Elaine Chao misstated government and industry guidelines related to self-driving cars—which is alarming, given that autonomous vehicles are a rapidly growing sector of the business her own department is charged with regulating.

Chao’s error came in response to a question from host Maria Bartiromo that asked about examples of “modernization” and “innovation” within the American transportation market. Here’s how Chao replied:

We have now self-driving cars. We have Level 2 self-driving cars. They can drive on the highway, follow the white lines on the highway, and there’s really no need for any person to be seated and controlling any of the instruments.

Unfortunately, that last bit’s not quite true. Self-driving cars are indeed on the rise, with companies like Tesla, Cadillac, Mercedes-Benz, Ford, and BMW investing billions of dollars to develop both fully and semiautonomous vehicles. But Level 2 systems are only partially automated: They still require a human driver to stay seated, monitor the road, and be ready to take control at any moment.

May 3 2017 5:44 PM

Future Tense Newsletter: What Can We Trust on the Internet?

Greetings, Future Tensers,

Look at your browser’s address bar—do you see a little green lock? If it’s there, that’s a good thing. The symbol, along with the “s” in an “https” address, is a widely accepted indication that you have a secure, encrypted connection—one that’s safe to use to make purchases or send private messages. But now, big players like Google are calling the system behind those locks into question. Joshua Oliver explains how the internet giant plans to respond with changes that “will almost certainly make website encryption more reliable.” The move will also “show just how much power Google has to redesign the internet’s critical infrastructure on its own.”
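For the curious, the trust behind that green lock can be inspected programmatically. The following is a minimal sketch (not from the article) using Python’s standard-library ssl module, which validates a site’s certificate against the operating system’s trusted certificate authorities—the same chain of trust at issue in the dispute the newsletter describes:

```python
# Minimal sketch: connect to a site over TLS, validate its certificate
# against the system's trusted certificate authorities, and report who
# issued it. A failed validation raises an exception instead of showing
# a "green lock."
import socket
import ssl

def get_certificate_issuer(hostname: str, port: int = 443) -> dict:
    """Return the issuer fields of the site's validated TLS certificate.

    Raises ssl.SSLCertVerificationError if the certificate does not
    chain to a trusted certificate authority.
    """
    context = ssl.create_default_context()  # uses the OS trust store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The issuer is a tuple of relative distinguished names; flatten it.
    return dict(field[0] for field in cert["issuer"])
```

Calling `get_certificate_issuer("slate.com")`, for instance, would reveal which certificate authority vouches for that site’s identity.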

Also on the topic of digital trustworthiness, Casey Fiesler wrote about how the uproar over the web service Unroll.me selling user data to Uber shows that companies don’t really want you to pay attention to their terms of service.

Elsewhere in Future Tense, we’re winding down our Futurography series on synthetic biology. Catch up on the latest by reading about how the emerging field may produce a cure for diabetes, how DARPA’s getting in on the synthbio game, and how new creations are shaking up the biological tree of life. Still hungry? We’ve also got a piece about a CRISPR-edited mushroom that shows that U.S. biotech regulations are woefully out of date, and one about a genetically engineered salmon that shows the difficulty of engaging the public in discussions about synthetic biology products.

Future Tense also hosted an event in Washington on April 27 about America’s longest war—the one declared against cancer. If you missed it, you can read Emily Fritcke’s recap of the conversation or watch the full event here. You can also read Athena Aktipis’ piece on how cancer became a hallmark of multicellular life—and what that means for the 30 trillion highly cooperative cells that make up our bodies. Kathryn Bowers also wrote for us about how a dog’s brilliant sense of smell might help us sniff out the disease.

Other things we read between compiling our complete collection of Elon Musk’s fairy tales:

  • WikiChina: Ian Prasad Philbrick writes about the Chinese government’s move to create a digital encyclopedia to rival Wikipedia—and why countries like China and Turkey have a penchant for blocking the user-driven information hub.
  • In Denial: Slate’s Susan Matthews argues that Bret Stephens’ debut New York Times column treads into textbook climate denialism.
  • GitHub for Science: Marcus Banks writes about the culture shift academics need to make—from summarizing their results in static scholarly journals to sharing full records of their experiments for the sake of better science.

Not clicking on Google Docs invitations,
Kirsten Berg
for Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

May 3 2017 4:19 PM

Do Not Click on Unexpected Emails Inviting You to Edit a Google Doc. It’s a Major Phishing Attack.

If you receive an email with an unexpected invitation to open and view a Google Doc, don’t do it. In what appears to be a large-scale phishing attack, people are reporting that they’re receiving these invitations from people they know, although they often include “hhhhhhhhhhhhhhh@mailinator.com” in the address.

If you click on “Open in Docs,” it will spam everyone in your Google contacts, and it may also try to steal your information.

While lots of people are reporting that they have received these invitations, journalism organizations and colleges seem to be particularly hard hit.

If you already clicked on the email—as some people on Twitter are sheepishly admitting—you should immediately check which applications you have granted access to your Google account. If you see a “Google Docs” application listed, remove it.

Phishing attacks from Google Docs seem to reappear every once in a while, so it’s good to keep an eye out for attacks like this in the future. As a basic rule of thumb, never download an attachment or click on a link that you aren’t expecting.

Want to learn more about protecting yourself online? Check out Future Tense’s “Cybersecurity Self-Defense” package.

Update, May 3, 2017, 5:20 p.m.: Google responded to our request for comment with an official statement: "We have taken action to protect users against an email impersonating Google Docs, and have disabled offending accounts. We’ve removed the fake pages, pushed updates through Safe Browsing, and our abuse team is working to prevent this kind of spoofing from happening again. We encourage users to report phishing emails in Gmail."

Update, May 4, 2017, 11:06 a.m.: Google contacted us with another statement: 

We realize people are concerned about their Google accounts, and we’re now able to give a fuller explanation after further investigation. We have taken action to protect users against an email spam campaign impersonating Google Docs, which affected fewer than 0.1% of Gmail users. We protected users from this attack through a combination of automatic and manual actions, including removing the fake pages and applications, and pushing updates through Safe Browsing, Gmail, and other anti-abuse systems. We were able to stop the campaign within approximately one hour. While contact information was accessed and used by the campaign, our investigations show that no other data was exposed. There’s no further action users need to take regarding this event; users who want to review third party apps connected to their account can visit Google Security Checkup.

Gmail has more than 1 billion monthly users, so the 0.1 percent Google cites could still mean roughly 1 million accounts were attacked.
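That back-of-the-envelope estimate is easy to check. A quick sketch, using only the two figures cited in the post (Gmail’s user base and the share Google reported):

```python
# Back-of-the-envelope check of the post's estimate, using the figures
# it cites: Gmail's monthly user base and the share Google reported.
monthly_users = 1_000_000_000   # "more than 1 billion monthly users"
affected_share = 0.001          # "fewer than 0.1% of Gmail users"

affected_accounts = int(monthly_users * affected_share)
print(affected_accounts)  # → 1000000, i.e., roughly a million accounts
```

Since both inputs are ceilings ("more than," "fewer than"), a million accounts is an order-of-magnitude estimate rather than a precise count.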

May 3 2017 12:35 PM

Facebook Is Finally Getting Serious About People Killing Each Other on Live Video

Facebook has apparently decided that it bears some responsibility for the broadcast of the killings, suicides, and sexual assaults that people have been posting on its social network, after all.

CEO Mark Zuckerberg announced Wednesday that the company is adding 3,000 people to its “Community Operations” team and simplifying its process for reporting prohibited activities on Facebook Live and other Facebook platforms. That’s in addition to the 4,500 people already on the team, Zuckerberg wrote. In his Facebook post announcing the move, Zuckerberg acknowledged the recent string of violent videos that have raised questions about the company’s moderation practices:

Over the last few weeks, we've seen people hurting themselves and others on Facebook—either live or in video posted later. It's heartbreaking, and I've been reflecting on how we can do better for our community.
If we're going to build a safe community, we need to respond quickly. We're working to make these videos easier to report so we can take the right action sooner—whether that's responding quickly when someone needs help or taking a post down.

Zuckerberg added that the changes will also make the company better at “removing things we don’t allow on Facebook like hate speech and child exploitation.”

I asked Facebook whether the new jobs would be employees or contractors, whether they’d be in the United States or overseas, and what the pay and benefits might be. The company replied that it had no details to add at this time.

The move comes after a series of widely publicized episodes in which people used Facebook’s video features to gain an audience for violent acts. Last month, for instance, a Cleveland man streamed live video of himself driving around the city as he told viewers he was planning an “Easter Day slaughter” of random strangers. In the end, he uploaded a single video of himself shooting and killing a 74-year-old man.

Facebook, the world’s dominant social network and an increasingly influential source of news and other media content, has come under increased pressure in recent years to exercise editorial oversight of its platform. The company has generally resisted such calls, insisting that it’s a technology company (i.e., a maker of tools) and not a media company (i.e., a curator of content). However, it has always taken at least some responsibility for enforcing standards that prohibit violence, nudity, threats, and the like. This move appears to be consistent with that philosophy.

At the same time, it fits a pattern of recent acknowledgements by Facebook that it needs human intelligence to help better address problems of content moderation, such as the proliferation of fake news. While the company is a leader in artificial intelligence technologies such as language understanding and image recognition, it’s clear that those tools are not sufficiently advanced to differentiate between, say, a clip from an action movie and a video of an actual killing.

Facebook’s apparent determination to address the problem of violence on its video platform is laudable. And it could actually make at least some difference, according to the company. From Zuckerberg’s post:

Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself. In other cases, we weren't so fortunate.

If anything, the move seems belated. As I’ve pointed out, Zuckerberg boasted when he launched Facebook Live that it would invite “raw” and “visceral” content. The company clearly sees live video as essential to its future as it tries to keep its edge over rivals that have appealed more to teens, such as Snapchat. Yet either it somehow failed to anticipate the degree to which the platform would draw disturbed attention-seekers, or it opted to follow its “move fast and break things” credo and worry about addressing such problems later.

Meanwhile, the company’s refusal to describe the jobs it's adding leaves it open to criticism about its labor practices. Reyhan Harmanci for BuzzFeed and Adrian Chen for Wired, among others, have chronicled in depth the emotionally scarring experience of content moderation. Wealthy technology companies often outsource the task to poorly paid contractors overseas. So far, the media attention does not appear to have shamed Silicon Valley companies sufficiently to make better working conditions a top priority.

Facebook also has a pattern of employing temporary human contractors whose work serves as a training model for the company’s own machine-learning software. Those human contractors can be let go either when the company believes the software is sufficiently advanced, or when the humans become a public-relations problem in their own right.

May 2 2017 5:52 PM

Tired of Just Blocking Wikipedia, China’s Government Wants to Create Its Own Online Encyclopedia

After years spent tussling with Wikipedia over issues of internet censorship, China’s government has announced its long-delayed plans to digitize its state-sponsored Chinese Encyclopedia. The print Chinese Encyclopedia has existed since the late 1970s, and the Chinese State Council, the ruling Communist Party’s cabinet, first approved an online edition in 2011. But concerns about the relevance of the encyclopedia in a digital environment reportedly delayed implementation of the digitization project. In April, though, things once again got moving. The Chinese State Council has announced that it will hire more than 20,000 scholars to create the encyclopedia.

Set to go live in 2018, the Chinese government’s effort will comprise more than 300,000 entries authored by Chinese academics and researchers spanning more than 100 different disciplines.

The Chinese Encyclopedia digitization project is a thinly veiled attempt to displace Wikipedia as a source of information for the more than 720 million Chinese netizens whose online activities are already limited by the so-called Great Firewall of China. The state has cyclically banned and reinstated Wikipedia access since 2004, usually in response to tetchy public-relations moments like anniversaries of the 1989 Tiananmen Square protests. Private Chinese internet companies have arisen to fill this inconsistent informational void, but offer dramatically less content than Wikipedia itself.

At an April 12 press conference, Yang Muzhi, the Chinese Encyclopedia’s editor-in-chief, called on the Chinese government to “guide and lead the public and society.”* That echoes comments he made in 2016. According to the South China Morning Post:

In an article in a mainland newspaper at the end of last year, Yang listed Wikipedia as a competitor which required “extra attention”.
“The readers regarded it to be authoritative, accurate, and it branded itself as a ‘free encyclopaedia that anyone can edit’, which is quite bewitching,” he wrote. “But we have the biggest, most high-quality author team in the world ... our goal is not to catch up, but overtake.”

Meanwhile, Turkey blocked Wikipedia over the weekend. Citing state-sponsored obstacles to access, limits on content, and violations of user rights, the think tank Freedom House classified Turkey and 20 other nations (including China) as “not free” in its 2016 Freedom on the Net report. And as a 2014 Human Rights Watch report described:

Turkish authorities have blocked tens of thousands of websites under the country’s draconian Internet Law 5651 over the last few years. The exact number remains unclear since the judicial and administrative procedures for Internet blocking are not transparent. In February, the government passed amendments to the law that expand censorship powers, enabling authorities to block access to web pages within hours, based on a mere allegation that a posting violates private life, without a prior court order.

China, Turkey, and other similarly restrictive regimes’ designs for ever-greater control over internet access and content may not pay dividends, however. In April, Yang dubbed the Chinese Encyclopedia project a “Great Wall of culture” designed to “guide and lead the public and society” amid what the South China Morning Post described as mounting “international pressure.” However unwitting, the comparison is telling. Although its impregnable legend lives on through wildly ahistorical Hollywood depictions, the actual, brick-and-mortar Great Wall of China was ultimately a poor bulwark against the “international pressure” of its day: the threat of foreign invasion. In 1629, the wall was breached by forces hailing from what is today Manchuria, plunging China into a heightened state of civil war that eventually resulted in the overthrow of the Ming Dynasty leaders who had built it.

The good news is that stopgap measures like China’s digitized encyclopedia look less like walls and more like speedbumps on the road toward a more open Internet. Although censorship has increased under the current Chinese President, Xi Jinping, his administration’s January crackdowns on unauthorized internet connections, virtual private networks, and other services to circumvent the Great Firewall are a sign that such shortcuts are working—and increasingly viewed as a threat to Communist Party rule. Digitizing the encyclopedia is also a nod to the increasingly open-access nature of information. “Our goal is not to catch up” to other forms of online content, Yang wrote of the project in a mainland newspaper last year, “but overtake.”

The move may also further mobilize domestic critics of state censorship, whose calls for greater Internet freedom have grown louder in recent months. In March, delegates to the Chinese People’s Political Consultative Conference, a national political advisory board, argued that “broad-brush censorship” by the Chinese government “is hobbling economic growth, breakthroughs in science, technology and innovation, the promotion of Chinese art and culture,” and creating informational divides between young Chinese citizens who reside on the mainland and those who live in the autonomous and less restrictive Hong Kong.

“It is not normal when quite a number of researchers have to purchase software that helps them bypass the country’s firewalls in order to complete their scientific research,” Luo Fuhe, executive vice-chairman of the China Association for Promoting Democracy and a vice-chairman of the CPPCC, reportedly told journalists at the conference.

Even so, the CPPCC’s recommendations were either censored or went unreported by mainland Chinese media, which is heavily vetted by the government.

*Update, May 2, 2017: This blog post was updated to include the fact that Yang Muzhi is the editor-in-chief of the Chinese Encyclopedia.