Future Tense

Facebook Is Finally Getting Serious About People Killing Each Other on Live Video

CEO Mark Zuckerberg says the company needs to do better after a string of episodes in which users broadcast killings, suicides, and other violent acts.


Facebook has apparently decided that it bears some responsibility for the broadcast of the killings, suicides, and sexual assaults that people have been posting on its social network, after all.

CEO Mark Zuckerberg announced Wednesday that the company is adding 3,000 people to its “Community Operations” team and simplifying its process for reporting prohibited activities on Facebook Live and other Facebook platforms. That’s in addition to the 4,500 people already on the team, Zuckerberg wrote. In his Facebook post announcing the move, Zuckerberg acknowledged the recent string of violent videos that have raised questions about the company’s moderation practices:

Over the last few weeks, we’ve seen people hurting themselves and others on Facebook—either live or in video posted later. It’s heartbreaking, and I’ve been reflecting on how we can do better for our community.

If we’re going to build a safe community, we need to respond quickly. We’re working to make these videos easier to report so we can take the right action sooner—whether that’s responding quickly when someone needs help or taking a post down.

Zuckerberg added that the changes will also make the company better at “removing things we don’t allow on Facebook like hate speech and child exploitation.”

I asked Facebook whether the new jobs would be employees or contractors, whether they’d be in the United States or overseas, and what the pay and benefits might be. The company replied that it had no details to add at this time.

The move comes after a series of widely publicized episodes in which people used Facebook’s video features to gain an audience for violent acts. Last month, for instance, a Cleveland man streamed live video of himself driving around the city as he told viewers he was planning an “Easter Day slaughter” of random strangers. In the end, he uploaded a single video of himself shooting and killing a 74-year-old man.

Facebook, the world’s dominant social network and an increasingly influential source of news and other media content, has come under increased pressure in recent years to exercise editorial oversight of its platform. The company has generally resisted such calls, insisting that it’s a technology company (i.e., a maker of tools) and not a media company (i.e., a curator of content). However, it has always taken at least some responsibility for enforcing standards that prohibit violence, nudity, threats, and the like. This move appears to be consistent with that philosophy.

At the same time, it fits a pattern of recent acknowledgments by Facebook that it needs human intelligence to help better address problems of content moderation, such as the proliferation of fake news. While the company is a leader in artificial intelligence technologies such as language understanding and image recognition, it's clear that those tools are not sufficiently advanced to differentiate between, say, a clip from an action movie and a video of an actual killing.

Facebook’s apparent determination to address the problem of violence on its video platform is laudable. And it could actually make at least some difference, according to the company. From Zuckerberg’s post:

Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself. In other cases, we weren’t so fortunate.

If anything, the move seems belated. As I’ve pointed out, Zuckerberg boasted when he launched Facebook Live that it would invite “raw” and “visceral” content. The company clearly sees live video as essential to its future as it tries to keep its edge over rivals that have appealed more to teens, such as Snapchat. Yet either it somehow failed to anticipate the degree to which the platform would draw disturbed attention-seekers, or it opted to follow its “move fast and break things” credo and worry about addressing such problems later.

Meanwhile, the company’s refusal to describe the jobs it’s adding leaves it open to criticism about its labor practices. Reyhan Harmanci for BuzzFeed and Adrian Chen for Wired, among others, have chronicled in depth the emotionally scarring experience of content moderation. Wealthy technology companies often outsource the task to poorly paid contractors overseas. So far, the media attention does not appear to have shamed Silicon Valley companies sufficiently to make better working conditions a top priority.

Facebook also has a pattern of employing temporary human contractors whose work serves as a training model for the company’s own machine-learning software. Those human contractors can be let go either when the company believes the software is sufficiently advanced, or when the humans become a public-relations problem in their own right.
