Future Tense

How Missouri Could Demonstrate What’s Wonderful About Yik Yak

The community doesn’t let bullies and racists rule.

Jonathan Butler, a University of Missouri grad student, with fellow protesters on campus, Nov. 9, 2015, in Columbia, Missouri.

Photo by Michael B. Thomas/Getty Images

Both social science research and common sense tell us that anonymity can bring out the worst in people. We might become bullies. We might be openly racist or sexist. We might threaten or provoke. After all, if we’re not accountable to our own reputations, what is preventing us from just being terrible?

The latest example of this comes from the University of Missouri campus, where student protests over campus racial tension recently led to the resignation of the university president. Last week, as evidence of racism in the campus community, the student body president tweeted screenshots of messages from Yik Yak.

If you haven’t heard of Yik Yak, you probably aren’t on a college campus. Launched in 2013, the social media app spread quickly enough to become a core mode of interaction at many universities. Besides anonymity and transience of identity (there is nothing to tie a single user’s messages together), what sets Yik Yak apart is its geographic constraint: users see anonymous messages (called yaks) posted within a 5-mile radius of their location. As a local social network, it’s the perfect platform for colleges, which often seem like their own little worlds. It also provides a glimpse into the culture of a local community. Sometimes, as in the case of those racist yaks, that glimpse is not a positive one; as is often the case with anonymity, what we see is bullying, hate speech, and threats. We are often quick to blame the technology for these problems, and in fact there have been recent calls to ban Yik Yak on campuses. But there are indications that even in the most troubling contexts, these anonymous communities are not nearly so terrible as they could be.
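
Yik Yak hasn’t published how that geofence works, but as a minimal sketch, a feed could simply filter posts by great-circle distance from the reader. Everything below (the function names, data layout, and sample coordinates) is an illustrative assumption, not the app’s actual code:

```python
import math

EARTH_RADIUS_MILES = 3958.8
FEED_RADIUS_MILES = 5.0  # the visibility radius the app advertises

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def local_feed(user_lat, user_lon, yaks):
    """Keep only the yaks posted within the feed radius of the reader."""
    return [y for y in yaks
            if haversine_miles(user_lat, user_lon, y["lat"], y["lon"]) <= FEED_RADIUS_MILES]

# A reader on the Mizzou quad sees the on-campus post but not one from Kansas City.
yaks = [
    {"text": "Anyone else stuck in the library?", "lat": 38.9442, "lon": -92.3266},
    {"text": "Posted 100 miles away", "lat": 39.0997, "lon": -94.5786},
]
print(local_feed(38.9404, -92.3277, yaks))
```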

Screenshot from Yik Yak

On Tuesday evening, threatening yaks appeared in the University of Missouri feed. Anonymous users (or perhaps a single one) warned people not to go to campus the next day, threatening to “shoot every black person” they saw. By the next morning, a suspect had been arrested. And this wasn’t the first time something like this had happened: in cases of imminent threats, Yik Yak has turned over the identities of anonymous users to the authorities a number of times in the past, most recently at Texas A&M University.

Yesterday, Yik Yak spoke up about the situation. Co-founder Brooks Buffington offered one very clear message to users: “This sort of misbehavior is NOT what Yik Yak is to be used for. Period.” He reminded users that such behavior is against the rules laid out in the app’s terms of service, that they can be suspended for it, and that Yik Yak cooperates with law enforcement in cases of threats. “If you haven’t done so lately, please take a moment to review our Terms,” he wrote.

But telling users to review the TOS isn’t going to do much good. Yik Yak’s terms look a lot like any other website’s: a block of more than 5,000 words, much of it incomprehensible legalese. My research on terms of service for user-generated content and social media sites shows that this is pretty typical. At least Yik Yak (unlike most websites I’ve studied) does include some plain-language rules, like not posting content that “harms either the feed or your community.” The rules also state that you can be suspended for misbehavior. They do not point out that you can be de-anonymized to law enforcement; you would have to read the 5,000-word TOS to learn that part.

Screenshot from Yik Yak

So these rules and terms probably aren’t what regulates behavior. And yet something is. I’ve spent a lot of time on Yik Yak, both as a graduate student at the Georgia Institute of Technology and now as a faculty member at CU Boulder. (Yes, your professors are on Yik Yak. Sorry!) I’ve also heard from many social computing researchers at other universities about the Yik Yak culture on their campuses. Content like the hate speech and threats from Missouri reported in the media is largely the exception, not the norm. In fact, the Yik Yak feed there is already largely back to the typical sex jokes, complaints about classes and the weather, and positive messages of support. Even the “don’t go to campus” threat received comments like “If that is a threat it is completely inappropriate” and “Please don’t say things like that.” And the current uncomfortable discussion (for example, highly upvoted reminders that nonminorities have problems, too) could be seen as negative, but it at least gives students a forum for open conversation about race. It is also providing insight to Missouri administrators and faculty that they wouldn’t otherwise have.

Screenshot from Yik Yak

Though Yik Yak does see its inevitable share of misbehavior, it isn’t nearly as bad as perhaps it could be. One reason is that the design of the system encourages the formation of social norms by letting users see what material is upvoted and downvoted, and then deleting content that accumulates five net downvotes. If you post content that the community doesn’t like, it won’t be there for very long. So just as someone learns where it’s unacceptable to light up a cigarette from the dirty looks he or she receives, Yik Yak users might learn that racism is unacceptable in their community from the downvotes they get. In this way, the users of Yik Yak are the ones with the most power to regulate behavior in their communities.
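
The company hasn’t published its moderation code, but the threshold mechanism is simple enough to sketch. Here is a minimal, hypothetical version, assuming (per the description above) that five net downvotes remove a post:

```python
DELETE_SCORE = -5  # assumed removal threshold: five net downvotes

class Yak:
    def __init__(self, text):
        self.text = text
        self.score = 0      # net votes, visible to the whole community
        self.visible = True

    def vote(self, delta):
        """delta is +1 for an upvote, -1 for a downvote."""
        self.score += delta
        # Community moderation: a low enough score pulls the post from the feed.
        if self.score <= DELETE_SCORE:
            self.visible = False

# Five disapproving students are enough to make a post disappear.
yak = Yak("a post the community won't tolerate")
for _ in range(5):
    yak.vote(-1)
print(yak.visible)  # False
```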

The stance of many social networking sites has been to leave users to their own devices: if the content is terrible, let them be terrible. Or, as in the recent case of Reddit, to rid itself of the worst offending communities altogether. Yik Yak, on the other hand, seems to be doing its best to encourage users not to be terrible. In addition to the strong statement the company made yesterday, there are algorithms helping out behind the scenes as well. If you write a post that contains threatening language (anything from the word “bomb” to racial or homophobic slurs), a message pops up asking if you really mean to post it, with a reminder that threats are serious. Yik Yak will also automatically downvote posts that contain names, an attempt to mitigate cyberbullying. Algorithmic downvoting (assuming that the algorithms are good at detecting the right kind of terrible content) could be very effective in encouraging social norm formation, since it would appear as if the disapproval comes from the community itself.
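
Neither filter’s internals are public. As a rough sketch under stated assumptions, both checks could be simple keyword matches; the word lists below are placeholders, not anything Yik Yak has disclosed:

```python
import re

THREAT_TERMS = {"bomb", "shoot", "kill"}   # hypothetical trigger words
PERSONAL_NAMES = {"alex", "jordan"}        # hypothetical roster of local names

def tokens(text):
    """Lowercase word set for crude matching."""
    return set(re.findall(r"[a-z']+", text.lower()))

def needs_confirmation(text):
    """Flag threatening language so the app can ask the author,
    before posting, whether he or she really means it."""
    return bool(tokens(text) & THREAT_TERMS)

def auto_downvote(text):
    """Start posts that name a person at a vote penalty,
    a rough countermeasure against cyberbullying."""
    return -1 if tokens(text) & PERSONAL_NAMES else 0

print(needs_confirmation("there's a bomb on campus"))  # True -> show the warning
print(auto_downvote("alex is the worst"))              # -1 -> post starts below zero
```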

If social norms are doing a decent job of regulating behavior on Yik Yak, then it really is up to individual communities to think about what kind of community they want to be. It only takes five students to quickly downvote a racist yak in Missouri. And in the meantime, social media companies should continue thinking about ways to design for the formation of positive norms, not just creating rules that no one will read or understand. It isn’t a given that people will be terrible. Sometimes they might surprise you.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.