
The Most Important Lesson From the Dust-Up Over Trump’s Fake Twitter Followers


The citizen’s guide to the future.
June 2, 2017, 11:29 AM
FROM SLATE, NEW AMERICA, AND ASU


We need more transparency from social networks.

But how many are bots?

Photo illustration by Slate

In January, White House press secretary Sean Spicer said that Donald Trump drew “the largest audience to ever witness an inauguration,” despite photos showing the crowd to be much smaller than those at his predecessors’ inaugurations. Later, Trump and his surrogates claimed that crowds protesting Republican members of Congress and the embattled travel ban were filled with people paid by his opponents.

This administration’s obsession with the numbers also translates to social media support. Trump has consistently pointed to his Twitter following as an example of hidden support not reflected in polls. Earlier this week, however, reports began to surface about millions of fake and bot accounts suddenly following @realDonaldTrump on Twitter. This unusual behavior prompted quick condemnation, with multiple articles suggesting the nefarious intent of Trump or his allies. Some suggested that this manipulation of follower numbers was intended to give the president the appearance of a much more robust grassroots backing than actually exists. Newsweek used the opportunity to dust off an old point of contention surrounding Trump’s Twitter following—that half of said “followers” are actually fake. Other news outlets and fact checkers quickly poured cold water on the story, with BuzzFeed pointing out, “Trump is not the only one experiencing an influx of potential bots. Other large accounts like Hillary Clinton, Barack Obama, and even Justin Bieber also have many new followers that appear to be fake Twitter users.”


Bots are a fact of social media life. If you’re on Twitter, they’re following you, too. However, for most accounts, only a small percentage of followers are fake. And of course, very few users can effectively claim that perceptions of their popularity could have a marked effect on public opinion. It’s different with Trump. But it’s also very difficult to determine whether new bots that suddenly follow Trump and other politicians are politically motivated—let alone who is behind those accounts that are purposefully manipulative.

Let’s be clear: Coordinated campaigns of misinformation and manipulation on social media are absolutely real and are becoming an increasingly prominent component of the online media landscape. A variety of state and nonstate actors are increasingly flexing their muscles on these platforms to achieve a range of propaganda ends around the world. Swarms of bots have been used to disrupt dissident activists in places like Turkey, Mexico, and Syria, and dedicated Russian psy-ops and cyberattacks certainly played a role in the 2016 U.S. election. This is a real threat, and one that bears a much closer look by society as a whole.

But, at the same time, this week’s story reveals another key truth about these emerging threats and the social media platforms on which they find success: The opacity of platforms like Twitter and their continued unwillingness to provide critical data to journalists and researchers makes it even more difficult to determine where campaigns of misinformation are emerging and who is behind them.

While the sudden boost of these fake accounts is suspicious, their actual origins and purpose are a matter of conjecture. We know that the follower count for a given account has changed, we have a list of those new followers, and we have a rough sense of the behavior of those accounts in ways that are indicative of whether they might be fake identities or bots.
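That "rough sense of the behavior of those accounts" is typically assembled from crude public signals: a default profile image, a brand-new account, zero tweets, a lopsided follower-to-following ratio. As a minimal sketch of how such a heuristic screen works — the `Account` fields, thresholds, and weights here are illustrative assumptions, not Twitter's actual detection logic or any researcher's validated model:

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal public-profile fields of the kind visible to any observer.
    (Hypothetical structure for illustration, not a real API object.)"""
    followers: int
    following: int
    tweets: int
    account_age_days: int
    has_default_profile_image: bool

def bot_likelihood_score(acct: Account) -> int:
    """Count crude red flags (0 = nothing suspicious).
    Thresholds are arbitrary illustrations, not a validated model."""
    score = 0
    if acct.has_default_profile_image:
        score += 1  # the classic "egg" avatar
    if acct.account_age_days < 30:
        score += 1  # very new account
    if acct.tweets == 0:
        score += 1  # follows others but never posts
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        score += 1  # follows thousands, followed by almost no one
    return score

# A days-old, silent, eggless account that mass-follows 5,000 users:
suspect = Account(followers=2, following=5000, tweets=0,
                  account_age_days=3, has_default_profile_image=True)
print(bot_likelihood_score(suspect))  # → 4 (all four flags trip)
```

The point of the sketch is its limitation: every signal above is visible from the outside, and none of it can establish who registered the account, from where, or why — which is exactly the data only the platform holds.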


But that is all the information we have, and all we are likely to get without a leak from whoever built these accounts or the help of a platform. There is no place to download data about these accounts or quickly find any information about, say, the IP addresses or other registration details associated with them. We also can’t effectively compare them in a real, quantitative way against other campaigns of misinformation that we’ve seen in the past. This limits our ability to connect this particular situation to other things that we’ve seen before or that may be occurring on Twitter or other social media platforms at the same time.

There are lots of reasons why an account might see a massive boost in follower count. It could be the unrelated actions of a large botnet simultaneously following a whole list of celebrity accounts on Twitter. It could be a coordinated action on the part of the administration or its allies for some persuasive end. Maybe someone is seeking to discredit belief in the president’s grassroots following by very obviously inflating the number of fake followers. It might simply be a counting glitch that, once fixed, produced the appearance of a sudden increase where there wasn’t one at all. Understanding the cause here isn’t just about Trump’s ego. It has massive political and societal import. But because social media platforms offer the public limited information about what’s happening behind the scenes, we have no way to ascertain when such a campaign is happening, or how. We can’t hold those engaging in unethical behavior to account, and we can’t create new mechanisms to challenge it. We are left with conjecture and accusations that raise the level of fear and suspicion in these online spaces.

Platforms need to take action to provide better public transparency on these issues. Transparency reports from companies like Google have massively expanded how researchers and the public alike understand issues like privacy, national security, and copyright. Along those same lines, social networks should be providing transparency reports about the kinds of misinformation campaigns they are seeing on their platforms. What the public needs to know is: Is a campaign of misinformation currently underway? Where is it originating from? What sorts of messages is it delivering, and how is it changing over time? Are there indications that these are state actors, or do they appear to be unaffiliated, private groups? These facts, and the underlying data that would support these determinations, would help protect online discourse and—more broadly speaking—democracy itself.

You can hypothesize about why platforms have declined to provide this level of transparency so far. Perhaps they fear revealing the extent and frequency of the campaigns occurring on their platforms, and the mistrust that might engender in users. Perhaps they worry that revealing certain techniques, or exposing the complicity of certain governments or actors, would invite demands that the platforms themselves solve these challenges. Perhaps they are rightly concerned about setting harmful precedents that would expose users’ private data to the public at large.

It will be a complex balancing act, but it’s still important to try. Unfair competition in the marketplace of ideas harms everyone. Bots, fake accounts, and active campaigns of distortion and misinformation erode the open participation and democratic discourse that made these social media platforms powerful and so promising when they began. Today, the lack of transparency enables ever-escalating and more virulent rounds of accusation and recrimination, with no way to study these manipulative campaigns when they occur, or even to ascertain that they are occurring at all.

The public needs to be able to hold people accountable when they attempt to leverage these channels of mass communication in deceptive and corrosive ways. So our message to social media platforms is simple: Take responsibility. The public urgently needs the light you can shed.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.


Tim Hwang is a writer working at the intersection of technology, law, and public policy. Formerly, he led the Intelligence and Autonomy initiative at Data & Society. Follow him on Twitter.

Samuel Woolley is the director of research at the Computational Propaganda Project at the University of Oxford and a fellow at Google Jigsaw.