Future Tense

What Facebook Can Learn From Craigslist

The humble Craigslist has tackled its own content moderation problems—and is succeeding.

Craig Newmark, left, and Mark Zuckerberg. Photo illustration by Slate. Photos by Paul Marotta/Getty Images and Dimitrios Kambouris/Getty Images for Time.

When social media sites first started to take hold, naysayers complained about the banality of documenting everyday life, lamenting lunchtime photos and outfit choices. Those concerns might seem quaint in comparison to Twitter threats against female journalists and Facebook Live broadcasts of hate crimes, suicides, and murder. Social media companies would doubtless prefer a world where users behave and content goes unmoderated, but as the 2016 U.S. presidential election made clear, it’s no longer possible for platforms to claim political neutrality. Proposed solutions are as varied as the offensive content itself. In some cases, people object that content is left up too long, as when videos of suicide might trigger others. In other cases, companies remove content because of obscenity, ignoring its educational or political value. The biggest issue is simply the lack of clarity around what gets taken down, what gets left up, and who exactly gets to make those calls.

The reluctance to make tough choices about moderating content is one reason for criticism that social media companies are essentially adolescents that need to grow up. So why not look for lessons on moderating users from an internet adult—a site that’s been around longer, that’s weathered serious controversy around content, and that’s had incredible success in doing so? Maybe a site, for example, like Craigslist?

Craigslist is hardly an internet cool kid. Although it’s the 14th most popular website in the U.S., it’s gained a reputation for stubbornly outdated features and design. While the rest of the web seems to be in a constant state of updates and makeovers, Craigslist looks much the same now as it did when it morphed from a mass email list to a website in the late 1990s. It also has a bad rep for attracting fraud, deception, and prostitution.

Even in its infancy, the internet was never all that wholesome. As Finn Brunton chronicled in his book Spam, the web has always hosted attempts to appropriate digital technology in order to make money, legitimate and otherwise. While some think of Craigslist as almost willfully riddled with scammers and obscenity, the site has actually been quietly battling fraud, extortion, and human trafficking for years. These efforts have had surprising success.

Although scammers have always caused problems on Craigslist, the issue came to a head in 2008, when state attorneys general across the country stepped up pressure to address concerns about prostitution and human trafficking via the site’s “erotic services” section. In response, the site began requiring a working phone number to place an ad and later started charging a small credit card fee (some of which was donated to charity). Within a year, the site reported that erotic services ads had dropped between 90 and 95 percent in five top U.S. markets.

The adult services controversy also helped clarify rules around what’s arguably the most important law protecting internet speech, Section 230 of the Communications Decency Act. The CDA was mostly intended to squash online porn, but Section 230 of the law, with a few notable exceptions, protects interactive computer service providers from being held responsible for content contributed by their users. The law was put to the test in 2009, when Cook County Sheriff Thomas Dart filed suit against Craigslist, alleging that its erotic services section was a public nuisance because it facilitated crimes like prostitution and human trafficking. The U.S. District Court for the Northern District of Illinois ruled in favor of the classified ads website, concluding that the law “would serve little if any purpose if companies like Craigslist were found liable under state law for ‘causing’ or ‘inducing’ users to post unlawful content in this fashion.” The decision reinforced the idea that internet service providers and digital platforms shouldn’t be legally on the hook for what users do on the web, whether that means sharing illegal downloads of songs or posting libelous rants in a newspaper comment forum.

Despite winning the lawsuit, Craigslist still decided to make a series of structural changes, including renaming the category from “erotic” to “adult,” to better moderate potentially offensive content. While the site had previously depended on user-based moderation to deter bad behavior, in May 2009 it also began using trained attorneys to manually screen adult services ads before publication. The company claims that, in the year following the policy change, its lawyers rejected more than 700,000 ads because they fell short of its posting guidelines, resulting “in a mass exodus of those unwilling to abide by Craigslist’s standards,” which were “stricter than those typically used by yellow pages, newspapers, or any other company that we are aware of.”

Consider the gap between this resource-intensive moderation and the Facebook approach. In May, the Guardian ran a series of articles that provided new levels of insight into how Facebook moderation really works, thanks to leaked memos and training materials, including dozens of PowerPoint slides that tell newbie moderators what flies and what doesn’t. Rather than relying on trained attorneys, Facebook outsources moderation to thousands of underpaid and under-resourced workers, many of them overseas. These screeners, typically subcontractors rather than Facebook employees, sort through content users have flagged as obscene or disturbing and then decide whether to ignore, delete, or “escalate” it to a senior reviewer. The leaked training materials offer endless lists of do’s and don’ts that make sense at a high level (“credible violence” is not allowed) but become murky at the level of specifics. It’s not OK to threaten the president (e.g., “someone shoot Trump”), but it’s fine to suggest violence against groups (e.g., “let’s beat up fat kids”), even though promoting harm against marginalized groups could be seen as inciting violence. A key issue here is the failure to contextualize: if you’re a young woman who wears a hijab and is painfully aware of recent hate crimes against Muslim women, seeing an anti-Islam post on your Facebook feed might feel deeply and personally threatening, even if it doesn’t mention a specific person or plan. Even though Facebook uses human labor to moderate content, its approach is rooted in a technical ethos that fails to distinguish between different users (and even reviewers). In reality, some users are more marginalized and more in danger than others, and some reviewers could do a better job than others at determining what constitutes risk and what forms of response are necessary.

Perhaps unsurprisingly, the volume of negative, hateful, and obscene material these moderators see leaves some of them with serious psychological fatigue. This baggage is familiar to Craigslist founder Craig Newmark. (Disclosure: Newmark is a donor to New America; New America is a partner with Slate and Arizona State University in Future Tense. Newmark had no role in the assignment or editing of this piece.) After stepping down as CEO in 2000, Newmark decided to focus on content moderation and fraud prevention by working in Craigslist’s customer service group, a job that includes screening user-flagged obscenity or fraud. In addition to doing public outreach about fraud and harassment, Newmark responds to user-reported instances of fraud and works to assist law enforcement. He explains his decision as a commitment to “doing enough to stay in touch with what’s real.” I’ve interviewed Newmark several times in the past few months as part of a larger project about online fraud, and he repeatedly emphasized his goal of working in the trenches to ensure a fairer, less scammy internet, at one point deadpanning, “I’ve committed to customer service, but only as long as I live.”

The phrase customer service sounds like selling jeans at the Gap, but what’s actually at stake here is a willingness to devote hours of time and expertise to the problem of moderating online content. Fundamentally, of course, Craigslist’s structure is different from that of social media platforms like Facebook. And I’m no cheerleader for Craigslist, which still has its problems responding to bad actors. But the company offers meaningful lessons as a digital forum that’s been wrestling with moderation issues since before Facebook was open to the general public. There are three primary things Facebook should consider as it continues to work on its moderation policies: It should invest in expert moderators who can make smarter choices. It should have more transparent policies about when and why it allows or removes content. And Mark Zuckerberg should probably cut short his cross-country listening tour and spend that time doing customer service himself.

There’s certainly no easy way to moderate the content of nearly 2 billion people with diverse backgrounds and social norms. But that doesn’t mean there can’t be better outcomes. Big social media companies are starting to realize that scamming, obscene content, and extremism are problems they might not be able to code their way out of. But they also might not need to reinvent the wheel. Social media companies tend to embrace technical solutions over social ones and to look ahead rather than considering the past. But a 21st-century technology company could learn an awful lot from a site that’s been around the block and seen plenty of the internet’s dark corners.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.