Twitter suggests users tweeting about police shootings follow the NRA.

Why Do Social Media Platforms Suggest We Follow the Wrong Accounts After a Tragedy?

Future Tense
The Citizen's Guide to the Future
July 8, 2016, 9:58 AM

What’s going on here?

Twitter

Social media has become central to the ways we experience and process tragedies of all kinds, making collective phenomena feel personal. Events that would once have been abstract or distant instead feel increasingly immediate, visceral, and—most importantly—human. More than a decade into the social moment, though, the platforms themselves are still struggling with this capacity for connection—as Facebook’s muddled response to streaming video of police shootings shows. But these companies’ true struggle may be less with their users than with the algorithms that attempt to bring those users together.

On Thursday, in the wake of the fatal shooting of Philando Castile by the police, activist and artist Maira Sutton found that when she searched for information about Castile on Twitter, the site suggested she follow the National Rifle Association’s official account.

While Sutton and others, such as the columnist Trevor Timm, initially suspected this was an ad buy from the NRA, evidence suggested otherwise. Twitter does allow its users to pay to promote their accounts, but those accounts are always clearly labeled as such, according to the company’s official explanation. Since there was no “Promoted” tag on the NRA profile link in Sutton’s screen capture (and no corresponding option to dismiss the suggestion, as there normally would be with an advertisement on Twitter), it seems likely that the recommendation arose more organically.

What occurred, then, is probably much simpler, but no less troubling. As at least one other Twitter user proposed to Sutton, the suggestion likely cropped up because many on the site were tagging the NRA in their own tweets about Castile. That’s understandable, especially given the NRA’s poor record of responding (or failing to respond) to shootings on social media. Noting that the two often appeared in concert, Twitter automatically surfaced the NRA’s account to users who hadn’t paired them themselves, on the assumption that they too might be interested in the connection—a connection that presumably no human reviewed or reflected upon.

Sutton would go on to observe that this possibility was, if anything, arguably worse than a simple ill-considered ad placement, writing that it was “a great example of how terrible Twitter’s algorithms are.” Timm made the same point, calling it a product of “Twitter’s awful algorithm.”

Twitter, for its part, hasn’t been especially transparent about how those algorithms work, though it is hardly alone among social media companies in its black-box approach to automated interaction. On its help page about account suggestions, the company discusses its approach in the broadest terms: “We may make suggestions based on your activity on Twitter, such as your Tweets, who you follow, accounts you interact with, and Tweets you engage with.” It further acknowledges that since the “suggestions are generated by algorithms … you may or may not know the accounts or find them relevant.”

A Twitter spokesperson said over email, “In search results, we show accounts that are relevant to the query. We determine what accounts to show based on several signals. For example, if numerous Tweets with the search term also mention a certain account, this account may appear.” Though that explains how the pairing happened, it does little to account for its lack of sensitivity.
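Twitter hasn’t published its actual ranking code, but the co-mention signal the spokesperson describes is easy to picture. As a purely illustrative sketch—the function, the sample tweets, and the account names are invented for this example, not drawn from Twitter’s systems—a suggestion engine of this kind might simply count which accounts are @-mentioned alongside a search term and surface the most frequent ones, with no check on whether the pairing is appropriate:

```python
from collections import Counter

def suggest_accounts(tweets, query, top_n=3):
    """Count accounts @-mentioned in tweets containing the query term;
    the most frequently co-mentioned accounts become suggestions."""
    counts = Counter()
    for text in tweets:
        words = text.lower().split()
        if query.lower() in words:
            # Tally every @-mention in a matching tweet, regardless of
            # why the account was mentioned (praise, criticism, or anger).
            counts.update(w for w in words if w.startswith("@"))
    return [account for account, _ in counts.most_common(top_n)]

tweets = [
    "why hasn't @nra said anything about castile",
    "the castile shooting is horrifying",
    "@nra silence on castile speaks volumes",
]
print(suggest_accounts(tweets, "castile"))  # ['@nra']
```

The sketch makes the failure mode visible: tweets criticizing an account’s silence count exactly the same as tweets endorsing it, so raw co-occurrence happily recommends the NRA to someone searching for a shooting victim’s name.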

It is clear, however, that Twitter isn’t the only offender on this front—or the worst. On Facebook, for example, Slate senior editor Gabriel Roth found that when he started posting about Brexit—about which he had extensively written—the site began recommending that he “like” white supremacist, libertarian, and men’s rights pages, positions distant from his own, however much they might be entangled with contemporary British politics more generally. (Facebook did not respond to a request for comment.)

Such examples speak to the fundamental awkwardness of social media algorithms, blunt instruments passing themselves off as precision tools. Last year, David Auerbach explored how Facebook might encourage you to friend an ex you never want to speak to again by mining your phone contacts, a system similar to the one Twitter uses for some of its suggestions. Because they, like other contemporary forms of artificial intelligence, lack anything resembling emotional intelligence, the algorithms—powerful as they may be—give little consideration to our human experience of their suggestions.

Trying to correct for algorithmic ineptitude can backfire, too, of course. Facebook famously ran into trouble earlier this year when reporting by Gizmodo suggested that its “Trending” section reflected the liberal political biases of the company’s employees. While Facebook has since worked to address those concerns, these suggestion mishaps indicate that social media companies might do well to dial up their human sociopolitical sensitivity just a little.

Twitter at least seems to be aware of the problem—and it appears to be grappling with its own systems accordingly. On Thursday, one of the company’s official accounts tweeted, “We’re sick of seeing names trend because they were killed brutally and unjustly.” We are too.

Future Tense is a partnership of Slate, New America, and Arizona State University.