
Blocklists Can Filter Out Harassment Online. And They Can Lead to Something Just as Ugly.


Decoding the tech world.
Aug. 11 2015 2:24 PM

Beware the Blocklists

They can be useful, imperfect tools for filtering out harassment online. And they can be just as troublesome as the problem they purport to fix.

Oblivious victims of corporate blocklists?

Photo illustration by Juliana Jiménez. Photo by Kacper Pempel/Reuters


Someone must have been telling lies about David A., for without having done anything wrong he was blocked one fine morning.

The first line of Kafka’s The Trial came into my head at the end of July when I pulled up the Twitter account of the in-progress Open Source Convention, or OSCON, a five-day gathering encompassing all aspects of free and open-source software, and found that it had blocked me. I normally shrug off occasional blocks as an inevitable consequence of human interaction, especially on Twitter, but OSCON is an event thousands strong, with speakers from companies ranging from Google and Netflix to Uber and Cisco. Given that OSCON is relevant to my work, and that I had never tweeted at or about OSCON before then, I decided to get to the bottom of things. I never did.

David Auerbach

David Auerbach is a writer and software engineer based in New York, and a fellow at New America.


In the social morass that is Twitter, blocking and blocklists, which prevent people from seeing your tweets and appearing in your notifications, have become an increasingly popular tool to screen out noise, idiocy, and outright harassment. Whether it’s some throwaway account jumping on you with misspelled racial slurs or political cranks coming after you for agreeing with them only 97 percent of the time, Twitter can sometimes feel like a tragedy of the commons repeated as farce. I’ve argued before that Twitter’s social-interaction model is fundamentally broken and that nothing short of a paradigm shift will make the network more hospitable and conducive to healthy human interaction. Until that happens, we are stuck with half-measures like blocklists, which sometimes encourage the very sort of bullying and hostility they’re supposed to prevent.

Blocking is a clumsy online equivalent of something humans do every day, which is shunning. Our social code allows groups of people to tune out interlopers whom they perceive as outsiders (for whatever reason), until the outsiders get the message and go away. Text-based Internet communication makes that harder, both because of the nuance we lose in, say, 140 characters and because so much social media communication goes ignored in the first place. In person, a group of five people can make it clear to someone that he isn’t welcome in a few minutes; on the Internet, that interloper may never figure it out. Add in trolls and assorted troublemakers, and clearly we are working with an inadequate set of Internet social signals. So when someone blocks you without clear cause, it’s fair to wonder, “What did I do?”

The answer, in general, is to ignore the question. The blocker doesn’t want to speak to you, and because she didn’t care to tell you why, you might as well quash your curiosity and move on. With OSCON, though, I wanted to know. It turned out OSCON had never intended to block me—or many of the other people it ended up blocking. I was able to get in touch with Joshua Simmons, a community manager for OSCON’s parent O’Reilly Media, who kindly took the time to explain what had happened. According to Simmons, during the conference OSCON had found itself dog-piled on Twitter in response to a controversial speaker. With the noise on the #OSCON hashtag reaching a fever pitch, Simmons made temporary use of two third-party blocklists, each with more than 10,000 accounts on it, which OSCON’s account blocked en masse. The Web app Block Together provides the infrastructure to share, export, and import blocklists among Twitter users, which is what OSCON had done. (Twitter added its own mechanism for importing and exporting blocklists this summer.) “Because I used the blocklist I was able to see through the brigade and retweet actual conversations to help surface them for attendees,” he told me. Unfortunately, the blunt instrument of the blocklists caught me and a number of other puzzled people in its path, an unintended effect. And because these blocklists are shared among people and companies, I wanted my name off whatever list it was on. If OSCON had used a list with me on it, would other tech organizations use it, too?
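Mechanically, a shared blocklist is little more than a list of account IDs that a tool like Block Together applies on a subscriber’s behalf. A minimal sketch of what adopting two third-party lists amounts to (the IDs and helper names here are hypothetical; real lists carry 10,000-plus entries each):

```python
def merge_blocklists(*lists):
    """Union several blocklists, each a set of numeric account IDs."""
    merged = set()
    for lst in lists:
        merged |= set(lst)
    return merged

def newly_blocked(merged, already_blocked):
    """Accounts the subscriber would block for the first time."""
    return merged - set(already_blocked)

# Toy data standing in for two imported lists and one personal list.
list_a = {101, 102, 103}
list_b = {103, 104}
mine = {102}
print(sorted(newly_blocked(merge_blocklists(list_a, list_b), mine)))
# [101, 103, 104]
```

The point of the sketch is how indiscriminate the operation is: every ID on either list gets blocked, with no per-account review along the way.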

There was only one problem: I wasn’t on any big blocklists published via Block Together, at least none I could find. One of the lists Simmons used, the anti-harassment “Good Game Autoblocker” list, was public and allows people to “appeal” for removal from it, but I wasn’t on that one. The second list, however, was private and, despite some inquiries, remains of unknown provenance. At the suggestion of Simmons, I wrote to Block Together’s email list to ask about the private blocklist. Shortly thereafter, several people on the list publicly attacked me on Twitter for asking stupid questions and for other spurious reasons (I blocked them, obviously), and I decided I’d peeked far enough down this particularly ironic rabbit hole.


I did, however, speak with the creator and maintainer of Block Together, Jacob Hoffman-Andrews. He clarified that the blocker-bullies I encountered didn’t speak for Block Together but merely watched its (public) mailing list. (Needless to say, do not email the Block Together mailing list, unless you’re fond of public ridicule.) Block Together, he explained, is a tool for sharing blocklists between Twitter users but did not create or maintain its own blocklists. Hoffman-Andrews, who by day works as a senior staff technologist at the Electronic Frontier Foundation, wasn’t aware of any secret blocklist that I was on, but he did say, “I keep an eye on how Block Together is used in practice, to see how I can make it work in ways that I think are more positive,” and told me he plans to improve the tool to make it easier to see whom you’ve blocked when you adopt a third-party blocklist.

At the moment, however, blocklists with tens of thousands of names remain a problematic tool, precisely because of the scenario I encountered: Few people, sometimes not even their creators, know exactly who is on them or why. When you subscribe to a blocklist, you are embracing someone else’s criteria for what is block-worthy, and as Simmons found out, those criteria may not line up with your own—nor will you necessarily even know what those criteria are. An individual block is nothing but an expressed desire not to communicate with someone, but a shared blocklist promotes the exclusion of a person—a social statement. That’s not bad in and of itself—in looking at some of the public lists, which are still difficult to locate given that there is no central directory for them, I saw many accounts that I’d be happy to block were I ever to run into them—but I saw little to suggest that the blocklists in play now are curated with any degree of transparency or logic. They are ad hoc assemblages, blocking both good and bad.
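One way to impose a little transparency before adopting someone else’s criteria is to audit the list against accounts you already know. A hedged sketch, assuming you can export the blocklist and your own follow and mention histories as sets of account IDs (all IDs below are made up):

```python
def audit_blocklist(blocklist_ids, followed_ids, mentioned_ids):
    """Flag entries on a third-party blocklist that you follow or have
    recently interacted with -- likely false positives worth reviewing
    before you subscribe to the whole list."""
    blocklist = set(blocklist_ids)
    suspicious = (blocklist & set(followed_ids)) | (blocklist & set(mentioned_ids))
    return sorted(suspicious)

blocklist = {1, 2, 3, 4, 5}
follows = {2, 9}
mentions = {5, 7}
print(audit_blocklist(blocklist, follows, mentions))
# [2, 5]
```

Even a crude check like this would have surfaced accounts, like mine, that a subscriber had no reason to block.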

For now, I continue to block or mute trolls and other irritants by hand, and I don’t share my blocklist with anyone else. If you decide to use a third-party blocklist, it’s best if you personally trust the judgment of those who curate it. Organizations, however, are better off avoiding them, or else they run the risk of inadvertently alienating innocents, supporters, and acerbic tech columnists. As Simmons said, “I wouldn’t use them as an organization because it’s too wide of a net. But as an individual target, I’m OK with that compromise.” After OSCON ended, Simmons turned off the blocklist and told me he doesn’t plan to use it again.

Online filtering and moderation have been thorny problems since the days of Usenet killfiles, which contained a list of users whose newsgroup messages were to be automatically trashed. But short of taking their accounts private, Twitter users don’t have a lot of options to reduce noise at this moment. Twitter is apparently working on filtering notifications to reduce the levels of chaos, but this is a difficult task. With blocklists proving an unsatisfactory salve even among their proponents (I see many blocklist users still complaining of being harassed, as though they’re bailing water from a sinking ship), users still face a trade-off between an unruly and often hostile public commons like Twitter or Reddit and a walled garden like Facebook or Slack. These models are fundamentally at odds with one another, and if there is a way to reconcile the best parts of each without the flaws of either, it will not come out of a centralized network where the powers of content and user adjudication are arbitrarily placed into the hands of a few corporate and organizational entities, whether it’s Twitter or Block Together.
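Killfiles worked on the same principle as today’s blocklists: a local list of senders whose messages the newsreader silently dropped before you ever saw them. A toy version, with hypothetical addresses and message records:

```python
# Hypothetical killfile: senders whose messages are silently dropped.
KILLFILE = {"spammer@example.com", "troll@example.net"}

def filter_messages(messages, killfile=KILLFILE):
    """Return only the messages whose sender is not in the killfile."""
    return [m for m in messages if m["from"] not in killfile]

inbox = [
    {"from": "friend@example.org", "subject": "Re: blocklists"},
    {"from": "troll@example.net", "subject": "u mad?"},
]
print([m["from"] for m in filter_messages(inbox)])
# ['friend@example.org']
```

The crucial difference from a shared blocklist is that a killfile was yours alone: each user curated it by hand, so its criteria were at least known to the person applying them.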

Imperfect solutions are sometimes the only ones available. But for now, beware strangers bearing blocklists.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.