Over the past several years, video game enthusiasts have gained a reputation for cultivating a culture of casual nastiness, hurling virtual rape threats at one another and lobbing lewd comments into the void—a norm that disproportionately affects the outnumbered women in that world. But as Laura Hudson describes in an excellent piece in Wired this month, that dubious achievement has actually inspired many gaming communities to take a harder look at their harassment problem than other online spaces have. Now, while massive Internet communities like Twitter lean back with a laissez-faire approach, these gaming companies are leading the field in addressing online harassment: lending resources to research the problem, innovating new techniques to solve it, and testing them out in the crowd.
Hudson focuses largely on the work of Riot Games, the publisher of League of Legends (a game whose players were, as of 2012, more than 90 percent male). Riot realized two years ago that “a significant number of players had quit the game and cited noxious behavior as the reason.” So it launched a “player behavior team,” employing “staff members with PhDs in psychology, cognitive science, and neuroscience to study the issue of harassment by building and analyzing behavioral profiles for tens of millions of users.” Riot found that nastiness was a communitywide problem: “If we remove all toxic players from the game, do we solve the player behavior problem? We don’t,” said Jeffrey Lin, Riot’s lead designer of social systems. Persistently bad-behaving players produced only 13 percent of the harassment in the game; the rest was lodged by “players whose presence, most of the time, seemed to be generally inoffensive or even positive.” The takeaway: “Banning the worst trolls wouldn’t be enough to clean up League of Legends. … Nothing less than community-wide reforms could succeed.”
To remedy the problem, Riot switched off its default feature that allowed opposing players to chat with each other during games. Asking players to opt in to chatting instead led to a 30 percent drop in harassing behavior and a 35 percent rise in positive interactions. Riot also found a way to reduce recidivism among its harassing users: Instead of quietly banning offenders, it clearly spelled out its justification for suspending users over offensive comments, which led to a rash of apologies from players, many of whom said they didn’t think before they spoke or didn’t fully understand the impact of their comments. (Riot says it’s since lifted 280,000 gamers from offender status to upstanding members of the community.) And it put the responsibility for policing this behavior into the community’s hands, creating a “Tribunal” of fellow players who vote on reported incidents of harassment and decide whether they constitute true offenses. While this sort of system is not perfect—is a community that’s inundated with casual cruelty really equipped to recognize that behavior as damaging?—Riot found that its experts concur with the Tribunal’s decisions almost 80 percent of the time. Also testing out new strategies for attacking the problem is Xbox, which uses player feedback to determine “whether a user gets rated green ('Good Player'), yellow ('Needs Improvement'), or red ('Avoid Me'),” giving other players the tools to avoid interacting with abusive members.
These experiments provide a few lessons for other online communities that haven’t been so successful in establishing positive community norms. The first is that establishing positive community norms is important! As gamer, cultural critic, and recipient of extreme sexist harassment Anita Sarkeesian notes, at Twitter, vile sexist comments like “I hope you get raped” are often not even treated with a slap on the wrist; they’re ignored completely by a system that only sees criminal threats as worthy of moderation. This is a bizarre standard—as Hudson points out, if a person yelled “I hope you get raped” in an office or crowded restaurant, they’d be asked to leave, and the Internet is a community like any other.
Another takeaway is that, as Hudson puts it, “Creating a simple hurdle to abusive behavior makes it much less prevalent.” If users have a clear incentive to play nice, they stay in line. But to me, the most important aspect of Riot’s approach is that it involves dealing directly (and even compassionately) with perpetrators, victims, and bystanders of this harassment. Many online communities focus on attempting to appease abused users with personal muting or blocking buttons (thus freeing harassing members to go on to harass other users). Others take a hard-line approach where users are either 100 percent welcomed or else forever banned, with no wider discussion. But Riot’s experience shows that a community problem responds best to a communitywide solution.