Future Tense

Twitter Could Do a Lot More to Curb the Spread of Russian Misinformation

Protecting democracy means making sure voters have access to reliable information.



Twitter has a Russia problem. So does Facebook. And Congress isn’t happy with either one. Now Google, Facebook, and Twitter have all been asked by the Senate Intelligence Committee to testify in a public hearing Nov. 1 about Russian interference in the 2016 election.

Expect to see executives grilled by senators who want to know exactly how Kremlin-linked content proliferated on their platforms, what their companies were aware of, and why they didn’t do more to stop it. Representatives from Twitter, for one, already testified in a closed briefing to the Senate on Thursday, but afterward, an unhappy Sen. Mark Warner of Virginia, the committee’s top Democrat, called the information shared “inadequate” and “deeply disappointing.” It “showed an enormous lack of understanding from the Twitter team of how serious this issue is,” he said.

We don’t know exactly what happened at that private briefing, but we probably have a pretty good idea, because afterward Twitter published, in the longest blog post in the company’s history, a rundown of what it says it discussed behind the closed Senate doors. The post revealed that Twitter had found 201 accounts that seemed to overlap in some way with the Russian-linked groups that Facebook says spent $100,000 on thousands of political ads to influence American voters in the U.S. election. Of the 470 accounts Facebook shared with congressional investigators that appeared to come from Russian propagandists, Twitter says 22 had corresponding accounts on Twitter. Another 179 Twitter accounts appeared to be linked or related to those Russian accounts.

While this information is a start, it’s disappointing. There’s still so much more we deserve to know. For instance, though Twitter shared only those 201 accounts known to have ties to Kremlin misinformation campaigns, researchers and academics have found hundreds more, including 600 accounts monitored by the Alliance for Securing Democracy, a project of the German Marshall Fund that tracks efforts to undermine democratic governments.

Twitter didn’t reveal exactly what those 201 accounts were posting, but in the past week, details have leaked about the ads bought by the corresponding Facebook accounts. Those ads were clearly intended to rile up opposing sides of America’s polarized politics: Some supported Black Lives Matter, while others suggested the racial justice movement posed a threat. Some of the Russian-linked Facebook ads were in favor of candidates Jill Stein and Bernie Sanders, but others were pro-Trump, as the Washington Post and Politico reported last week. The corresponding Kremlin-backed accounts on Twitter likely showed the same understanding of America’s deepest political and social divides and engineered messaging to widen those rifts.

Twitter also provided ad-buy information for three accounts associated with the Russian government–backed news outlet RT, which a January report from the Office of the Director of National Intelligence called “the Kremlin’s principal international propaganda outlet.” Those RT accounts spent $274,100 on U.S. ads in 2016, most of which were “directed at followers of mainstream media and primarily promoted RT Tweets regarding news stories,” Twitter said. But that’s hardly shocking. Most news outlets spend some money on Twitter, and RT was quick to point out how unremarkable that information is. “Twitter has just unveiled horrendous information in Congress—that we’ve been spending money on our advertising campaigns, just like every media organization in the world,” said RT editor in chief Margarita Simonyan on Thursday. (Sources did confirm to Slate that the tweets from the RT ad buys were shared with Congress.)

Twitter says that it’s been working hard to combat the spread of misinformation on its platform, though experts who have been studying how bots and counterfactual news are weaponized on Twitter say the company could be doing much more.

Take what Twitter has done to fight ISIS’ use of its network. In March, Twitter released a transparency report that said the company suspended 376,890 accounts for sharing terrorism-related content between July and December 2016, only 2 percent of which were a result of government requests to take them down. Seventy-four percent of the extremist accounts Twitter removed were found by its “internal, proprietary spam-fighting tools,” according to the company.

Sam Woolley, a lead researcher at Oxford University’s Project on Computational Propaganda, says that Twitter and Facebook’s recent interactions with Congress show “these companies … have the capability both through human intervention, but also through machine learning, to track and know who is using their site and for what purposes.”* And if Twitter can find hundreds of thousands of ISIS-related accounts, and differentiate between accounts that are merely discussing ISIS and those that are actually promoting extremism, then it could probably also do a better job of finding Kremlin-linked accounts, especially considering many of them appear to be the work of automated bot accounts.

There’s evidence that Russia actively uses bots on Twitter to stoke political unrest in the U.S. ProPublica reported that one such account took the name of Angee Dixson, a self-described conservative Christian who joined Twitter on Aug. 8, right before the white supremacist rally in Charlottesville, Virginia. Angee fired off some 90 tweets a day as she vigorously defended President Trump’s response to the “Unite the Right” rally and shared pictures that allegedly showed violence on the part of counterprotesters in Charlottesville. Angee sent five tweets in the span of one minute, according to an archive of the account, and all of them contained a link made with a URL shortener and a photo, both of which take time to set up in a tweet. Sending out one tweet every 12 seconds is a strong indication that Angee wasn’t a real person, but rather a software program tweeting on someone else’s behalf. Even the account’s profile picture was stolen: ProPublica linked it to a photo of a model that at one point was rumored to have dated Leonardo DiCaprio. Though it’s hard to directly connect any one bot to its source, “Angee’s” tweets reportedly used language similar to that of the Russian government–backed outlets Sputnik and RT.
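That 12-second cadence is exactly the kind of signal an automated check can surface. Here is a minimal sketch, in Python, of the rate heuristic described above; the function name, window size, and sample timestamps are illustrative assumptions, not anything Twitter or ProPublica has published.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, window=5, max_span=timedelta(seconds=60)):
    """Flag an account if any `window` consecutive tweets fall within
    `max_span`, e.g. five tweets inside a single minute. A human composing
    tweets with shortened links and photos can rarely sustain that pace."""
    times = sorted(timestamps)
    return any(
        times[i + window - 1] - times[i] <= max_span
        for i in range(len(times) - window + 1)
    )

# Illustrative timestamps, roughly one tweet every 12 seconds, like the
# archived "Angee Dixson" burst described above.
burst = [datetime(2017, 8, 12, 14, 0) + timedelta(seconds=12 * i) for i in range(5)]
print(looks_automated(burst))  # True
```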

Twitter doesn’t need to ban bots altogether. But it could do better at flagging accounts that are highly automated. “It seems like a pretty clunky way of doing it,” says Woolley, who notes that all kinds of people and groups schedule and automate posting tweets. “But it’s akin to Wikipedia’s method of having a flag at the top of an article that says this article needs to be improved upon for various reasons.” During the third presidential debate in October 2016, Woolley and his Oxford colleagues found that pro-Trump bots were tweeting with debate-related hashtags seven times more often than pro-Clinton bots. (The team determined that the accounts they were monitoring were highly automated because they tweeted more than 200 times with debate-related hashtags or candidate mentions in a period of four days.) Those fake accounts give the false impression of a groundswell of grass-roots support for a politician or topic, and that can confuse real voters legitimately trying to understand America’s complicated and increasingly fraught political climate.
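For illustration, the Oxford team’s threshold could be expressed in a few lines of Python. The 200-tweet, four-day cutoff comes from the article; the tweet format and the term list are assumptions invented for this sketch.

```python
from collections import Counter

# Stand-ins for debate-related hashtags and candidate mentions; the real
# Oxford lists were broader. These terms are assumptions for the sketch.
DEBATE_TERMS = ("#debate", "#debatenight", "@realdonaldtrump", "@hillaryclinton")
THRESHOLD = 200  # matching tweets within the four-day observation window

def highly_automated_accounts(tweets):
    """tweets: iterable of (username, text) pairs collected over four days.
    Returns the usernames that exceed the high-automation threshold."""
    counts = Counter(
        user
        for user, text in tweets
        if any(term in text.lower() for term in DEBATE_TERMS)
    )
    return {user for user, n in counts.items() if n > THRESHOLD}
```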

Another thing Twitter could do to combat disinformation, according to Woolley, is to analyze the content of what’s being tweeted. It wouldn’t be difficult, for example, to create a graph of the network and assign a score to particular words or phrases or locations. So, if an account is only tweeting about Donald Trump all day and all night without rest, and then upon further inspection Twitter learns the account is based in Moscow, that might warrant being flagged as a bot. If the analytics tools were sophisticated enough, Woolley says they could very likely differentiate between a legitimate bot—say, one that tweets about traffic conditions—and a malicious one.
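As a rough illustration of the scoring Woolley describes, a few weighted signals could be combined into a single suspicion score, with self-declared service bots (like a traffic account) whitelisted. Every signal name, weight, and cutoff below is an assumption made up for the sketch, not a tested model.

```python
def suspicion_score(account):
    """account: dict of precomputed signals for one account."""
    score = 0.0
    if account.get("topic_concentration", 0) > 0.9:  # tweets about one subject almost exclusively
        score += 0.4
    if account.get("active_hours_per_day", 0) > 20:  # posts "all day and all night without rest"
        score += 0.3
    if account.get("location") in {"Moscow", "St. Petersburg"}:  # assumed watchlist
        score += 0.3
    return score

def classify(account):
    # A bot that declares itself a service account is treated as legitimate;
    # high scorers go to human review rather than being banned outright.
    if account.get("declared_service_bot"):
        return "legitimate bot"
    return "flag for review" if suspicion_score(account) >= 0.7 else "ok"

example = {"topic_concentration": 0.97, "active_hours_per_day": 23, "location": "Moscow"}
print(classify(example))  # flag for review
```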

The point here is that social media companies like Facebook and Twitter could have done much more to protect Americans and the integrity of the 2016 presidential election from bad actors. Twitter has taken steps to support democracy in the past: During the Green Movement in Iran in 2009, it worked to keep its service running to help people communicate amid extreme government censorship. Twitter, Facebook, and other Silicon Valley giants can work to enable and protect democratic processes. That means not only weeding out bots, but also cooperating fully with congressional investigations.

There’s still a lot we don’t know here, like how many people clicked on the links shared by those accounts or what the tweets said. And those accounts likely represent only a small fraction of the Russian-backed activity on Twitter intended to aggravate American political divisions. Facebook, for its part, said Monday that it is handing the 3,000 Russian-linked ads it found over to congressional investigators.

Expect more come Nov. 1, when executives from Facebook, Twitter, and Google will all be in the Senate hot seat. Until then, it’s probably safe to expect that more details will either be leaked to the press or released voluntarily by these companies. Now that Facebook and Twitter have both provided evidence that Russians used their products to undermine the U.S. elections, the public deserves to know exactly how it happened.

*Correction, Oct. 6, 2017: This article originally misspelled Samuel Woolley’s last name.