Future Tense

You’ll Never Guess This One Crazy Thing Governs Online Speech

Hint: It’s not the First Amendment!

Illustration by Slate. Image by Prisma Illustration/Thinkstock.

Early last week, Twitter announced that it would be using new tools to curb hate speech and harassment on its site. The news came on the heels of a tell-all report on BuzzFeed that chronicled how 10 years of dogmatic commitment to “free speech” combined with persistent mismanagement led to the popular social media app becoming “a honeypot for assholes.” Twitter’s former head of news, Vivian Schiller, told BuzzFeed, “The whole ‘free speech wing of the free speech party’ thing — that’s not a slogan, that’s deeply, deeply embedded in the DNA of the company.” That ethos made it all the more difficult to regulate abuse on the site.

But absent from the discussion is a more fundamental question: Should we be using the notion of “free speech” to understand online speech at all?

As a general matter, it’s important not to confuse the First Amendment with the broader notion of free speech. Free speech policy relates to the First Amendment roughly the way Cheez Whiz relates to dairy products: They are related, but fundamentally different. The First Amendment protects “free speech” by saying that the government cannot (with certain important exceptions) prevent you from speaking. But private individuals and corporations, like Twitter, are not covered by the First Amendment and can curate or even censor speech without violating the law. In fact, some have argued that a platform’s right to keep up and take down what’s posted there is its own free speech right. Others have pointed out that not policing for abuse has a chilling effect on speech.

Twitter’s rigid adherence to being the so-called “free speech wing of the free speech party” seems reminiscent of a scene from the cult classic movie The Big Lebowski. Vietnam veteran Walter Sobchak (John Goodman) is sitting in a diner having a loud and animated conversation with his friend Jeff “The Dude” Lebowski (Jeff Bridges). A waitress gently asks Walter to lower his voice, because “this is a family restaurant,” a request that sends Walter into an apoplectic fit, screaming at the waitress that “the Supreme Court has roundly rejected prior restraint.”

“Walter, this isn’t a First Amendment issue,” The Dude says before walking out in frustration.

So if the First Amendment doesn’t constrain how speech is regulated by online platforms, what does? And what should?

One of the main forces governing speech online is the same thing that governs Walter’s speech in his local diner: societal norms. Norms are customary standards for behavior that are shared in a community. They can be self-enforced by a person’s desire to fit in with the group and conform, and they can also be externally enforced by the group when an individual violates the norm. Speaking at a lower volume in a public place is one such norm, and shaming a person who yells loudly is one way that norm is enforced.

But while geographically bound communities have had thousands of years to evolve norms in real life, the development of expectations for behavior on a global internet is still in its nascent stages. This is especially true for online speech, says Nicole Wong, the former vice president and deputy general counsel at Google who helped establish YouTube’s public policies on speech. Over the last 20 years, says Wong, online speech has been undergoing a “norm setting process” that is different from, and much faster than, previous responses to technological advances in publishing platforms. “We’re still in the middle of how to think about the norms of behavior when what is considered ‘appropriate’ speech is constantly iterating,” says Wong. “If you layer the changes in technology over a broadening array of cultural, racial, national, global perspectives, it is hard to pin down principled, universal social norms, let alone create policy to reflect them.”

The task of creating policy for governing online speech falls not to governments, but to platforms. Individual platforms that host users’ content—like Twitter, Facebook, Tumblr, or YouTube—are each responsible for creating policies that reflect the online speech norms of the community the platform wants to create. Those policies most often take the form of a platform’s terms of service or community guidelines. For example, at YouTube, such policies prohibit the posting of pornography or sexually explicit content; at Facebook, community standards ban the posting of content that promotes self-injury or suicide.

Having those policies in place is important, but equally important is having a system in place that is nimble enough to allow for changes in that policy as norms evolve. And while platforms like Twitter have historically struggled in this capacity, others, like Facebook, have excelled. After the site’s policy on female nudity resulted in takedowns of women posting photos of their mastectomy scars, Facebook created an exception to its policy. When similar outcry erupted over the removal of breast-feeding photos, the policy changed again.

“What we do is informed by external conversations that we have,” explained Monika Bickert, Facebook’s head of global policy, in an April interview with the Verge. “Every day, we are in conversations with groups around the world. … So, while we are responsible for overseeing these policies and managing them, it is really a global conversation.” Facebook’s flexible responsiveness to the expectations of its community might be one reason its user base keeps growing while Twitter’s stagnates.

The underlying principle that Facebook has managed to grasp and put into motion is that digital speech is about much, much more than Twitter’s black-and-white notion of “free speech.” Online speech is not about simple speech absolutes. It’s about developing a global system of governance that empowers the most while harming the least.

Talking about online speech in terms of “free speech” isn’t incorrect; it just misses much of the picture. Or, more accurately, as The Dude might counsel, “You’re not wrong, Walter, you’re just an asshole.” The sooner we start thinking of online speech not only in terms of “free speech” but in terms of responsible and responsive platform governance, the sooner we create the internet we want.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.