Facebook was first; now it’s Twitter’s turn in the Russian-interference hot seat. The company was on Capitol Hill Thursday for a closed-door briefing with Senate Intelligence Committee investigators about what role false information spread by Russia-associated Twitter accounts may have played during the 2016 presidential campaign. And now, just in time, there’s also some new data to help us understand the ways those accounts may have targeted U.S. voters.
U.S. intelligence agencies believe that Twitter, like Facebook, was used as a tool in a Kremlin-approved campaign to influence U.S. politics. We already knew what kinds of tweets and links were involved: stories from the propaganda outlets RT and Sputnik, and links to conspiracy-ridden fake news sites with emotionally charged headlines and viral-ready images. According to new research from the Oxford Internet Institute released Thursday, tweets containing links to fabricated news articles, politicized content from Russian outlets, and unverified reports from WikiLeaks appeared to come more often from swing states like Pennsylvania, Florida, and Michigan than from the rest of the country. After analyzing more than 22 million tweets sent between Nov. 1 and Nov. 11, 2016, that used election-related hashtags, the researchers also found that across the country, such tweets outnumbered tweets containing links to stories from professional news outlets.
While Twitter, with its roughly 328 million users, has a much smaller audience than Facebook, which has about 2 billion monthly users, the social network became a major battleground of the election, in part due to President Trump’s affinity for the platform. Facebook has already provided information to Congress about how propagandists believed to be backed by Russia bought about $100,000 worth of ads to influence the 2016 election, which the Washington Post reported included ads that supported candidates Bernie Sanders and Jill Stein, as well as ads designed to stoke tensions around Black Lives Matter and Muslims. Those ads were targeted to different groups of Facebook users based on their political preferences, apparently to widen divides in an already fraught political climate.
According to a company blog post published after the Senate committee meeting, Twitter shuttered 201 accounts linked to the same Russian operation that bought political ads on Facebook. In the same post, the company detailed how the Russian-backed media outlet RT spent more than $274,000 on targeted Twitter ads in the U.S. in 2016, among other new details. RT’s accounts @RT_com, @RT_America, and @ActualidadRT promoted more than 1,800 tweets that likely targeted U.S. users.
The Oxford study, which looked at tweets that were specifically related to the U.S. election, has some limitations—for starters, it only counted tweets that used hashtags. There’s also no real way of knowing how many people actually clicked on the links in the tweets that peddled propaganda and misinformation, only how often they were shared in conjunction with an election-related hashtag.
Furthermore, the study relied on the locations listed in users’ profiles, and those accounts could actually have been based anywhere. Since Twitter lets users view which topics are trending in specific locations, the accounts behind those tweets could have been targeting particular areas, like Florida, to make certain topics trend there. What that suggests is a potentially sophisticated influence campaign, one that appears to have known which geographic areas would be most important to hit. For its part, Twitter says it has been working to combat accounts that try to game its trending topics: Since June, the company says, it has detected an average of 130,000 accounts per day attempting to manipulate its Trends feature. But Twitter didn’t clarify what exactly it’s doing to blunt those accounts’ influence.
Our picture of how the alleged Russian influence campaign worked on Twitter keeps getting fuller. An earlier study from the Oxford researchers found that bots—automated accounts that tweet without a human pressing the button each time—overwhelmed Twitter during key moments in the run-up to the election, giving a false impression of grass-roots support for Trump. During the third presidential debate, for example, bots sharing pro-Trump content outnumbered pro-Clinton bots 7 to 1. The researchers identified bots by looking for accounts that tweeted more than 200 times during the data-collection period, Oct. 19 to Oct. 22, while using a debate-related hashtag or mentioning a candidate. And between the first and second debates, more than one-third of pro-Trump tweets were found to come from automated accounts.
And there is evidence that Russian-backed Twitter accounts, including bots, didn’t stop working after Trump’s victory. According to data collected by the Alliance for Securing Democracy, a project of the German Marshall Fund that tracks efforts to undermine democratic governments, Russia-linked Twitter accounts—many of them fake profiles with borrowed photos—worked to promote and share extremist right-wing tweets and disinformation after the Charlottesville “Unite the Right” rally. Analyzing a collection of 600 Twitter accounts known to be linked to Russia—including openly pro-Russian users, accounts that take part in Russian disinformation campaigns, and automated bots that parrot Russian messaging—the researchers found that “PhoenixRally,” “Antifa,” and “MAGA” were among the most common hashtags those accounts used following the violent rally in Virginia. The campaign of online discord, in other words, seems never to have ceased.
Congress is likely to have many more questions for Twitter. Chief among them: Why didn’t Twitter do more during the election to stop the flow of misinformation, particularly from bots and from accounts set up by foreign governments to sway voters with false information? And why isn’t it doing more now?