
Facebook Needs to Revamp Its System to Report Abuse

The citizen’s guide to the future.
Aug. 6 2014 6:49 AM

The Facebook Justice System

The social network needs to change how it deals with reports of abuse.

Photo: A woman stares at her computer in horror. Facebook should offer better recourse than this. (Courtesy of Shutterstock)

In December 2013, a man set up a fake Facebook profile in the name of Meryem Ali, a woman in Texas. When Ali’s family and friends accepted the man’s friend requests, they saw doctored photographs of her, she says, cut and pasted to look like porn. One photo featured her head atop a nude body. Another showed her having sex. Ali hadn’t consented to the posting of the images, and she reported them to Facebook. But she heard nothing back from the site for three months, she claims, until the police opened a (rare) criminal investigation. Only after the police asked Facebook for the poster’s identifying information did the fake profile come down.

Ali is now suing the poster (a former friend) for intentional infliction of emotional distress, as well she should. She is also suing Facebook, to the tune of $123 million, for failing to respond more quickly to her request to remove the fake profile. That suit, I think, is less deserved—and certainly not the way to get Facebook and other social media sites to protect their users from similar abuse.

In 1996, Congress passed the Communications Decency Act, Section 230 of which immunizes online providers from being held responsible for most of what their users do and say. In a few cases, individuals have been allowed to sue online providers for breaking promises, and Ali is trying to frame her claim as a broken promise based on Facebook’s terms-of-service agreement, which bans nudity, harassment, and bullying. But having a policy against those things is not the same as making a promise to each and every user to remove content that contains nudity or amounts to harassment or bullying.

So, Ali’s claims are unlikely to stick. But Facebook (which I have advised in connection with my work as a member of the Anti-Cyberhate Working Group) and other content providers should heed her lawsuit’s message. Ali’s claims express dissatisfaction with the enormous, unchecked power that digital gatekeepers wield. Her suit essentially says: Hey Facebook, I thought that you had a “no nudity” and “no harassment” policy. Other people who reported abuse got results; why not me? Why would you take down photos of women breastfeeding but not doctored photos portraying me, without my permission, as engaged in porn? Did my complaint get lost in a black hole, or was it ignored for a reason?

Facebook could have alleviated a lot of Ali’s frustration by actually responding to her when she first made contact. With great power comes great responsibility, and Facebook needs to improve its terms-of-service enforcement process by creating an official means of review that includes notifying users about the outcome of their complaints. (Right now, Facebook sends an automated message to policy transgressors notifying them that their content has been removed because it “violates Facebook’s Statement of Rights and Responsibilities” without saying more, and, as Ali’s case shows, those reporting abuse do not necessarily hear back from Facebook about its decisions.) Facebook can also improve the enforcement process by ensuring that reports of certain abuse—like harassment, nude images, and bullying—get priority review over others, such as spam. When users file complaints, they should be prompted to provide information that would help staff identify the complaints requiring immediate attention.

Speaking of staff: Computers can’t approximate the contextual judgments of human intelligence, at least not yet. That may mean that Facebook needs to hire more employees to handle complaints. (Right now, Facebook employs hundreds of safety employees in four offices across the globe, but that may not be enough given its scale of 1.3 billion users.)

Of course, Facebook is not the only company reviewing user violations. For small startups that cannot afford to hire more staff, recruiting users to help enforce community norms is an alternative. The multiplayer online game League of Legends has enlisted its users to help address players’ abusive behavior, notably harassment and bigoted epithets, with much success. With a little incentive and some oversight, trusted users can be effective enforcers of a site’s community norms.

Bottom line: Facebook needs to start explaining its decisions when users file complaints, no matter the result. Ali should have been told whether Facebook viewed what happened to her as a violation, whether it would be taking the content down, and what the next step would be. And to ensure the fairness of the process, Facebook should not only notify users of decisions but also permit them to appeal. Of course, Facebook is not our government; it does not have to grant individuals any due process under the law. But it should have an appeals procedure anyway, because when people perceive a process to be fair, they are more inclined to accept its results.

Ali’s claims against Facebook deserve serious consideration, but she shouldn’t win her suit against the company. Undoing the federal immunity for platforms that are trying to protect users from destructive abuse is not the answer. If efforts to influence user behavior through community guidelines, terms-of-service agreements, and the hiring of safety staff amount to legally binding contractual commitments, there will be far fewer efforts to combat abusive speech. Instead, the key takeaway of Ali’s case is that Facebook and its peers need to be more transparent and accountable to users to engender public support. They might not care about doing the right thing for the right reasons (indeed, they may enforce safety policies to keep advertisers or shareholders happy), but clear policies, a means of review, and transparent enforcement decisions will help protect users from destructive abuse, no matter the inspiration.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.

Danielle Citron is the Morton & Sophia Macht Professor of Law at the University of Maryland Carey School of Law. She writes about privacy and is the author of Hate Crimes in Cyberspace.