
How the Wolf of Wall Street Created the Internet

In a suit to protect his company’s reputation from angry online commenters, the real wolf of Wall Street helped shape the legal rules that govern the Internet.

Leonardo DiCaprio in The Wolf of Wall Street. Still courtesy of Paramount.

The Wolf of Wall Street, Martin Scorsese’s splashy three-hour portrait of financial excess, has produced a surprising amount of Internet controversy: defensive interviews with Leonardo DiCaprio, a scorched-earth “open letter” from the daughter of a convicted trader, and the usual outlandish and bloodthirsty commentary that always peppers online public forums (from desires to “airstrike” a Wall Street movie theater or “cue the guillotines” for bankers, to the typical declarations that theft is “routine” at Goldman Sachs and bankers are “sociopaths”).

We take the cacophony of the Internet for granted, but two decades ago it wasn’t obvious that it would develop this way. And it turns out we have the real wolf of Wall Street to thank in part for the Internet we’ve got. The rules that allow for our rollicking, easily scandalized Internet would not be the same without Jordan Belfort, the convicted stockbroker who is the model for DiCaprio’s character, and his felonious firm, Stratton Oakmont. In the mid-1990s, Stratton Oakmont started the lawsuit that led to much of the basic legal framework that governs Internet content. Were it not for that lawsuit, and the strange ruling and strong backlash it created, the Internet as we know it might be a very different place.

First, some background. (Yes, that headline was partly a trap: This article will now attempt to explain a complicated area of law.) Traditionally, liability for wrongdoing can extend beyond the parties immediately involved. Employers can be liable for accidents caused by their employees. Bartenders can be liable for serving customers who get behind the wheel. And, in the world of words and ink, publishers can be liable for the materials produced by their authors.

In the early days of the Internet, it was unclear whether and how a variety of new online services—the first chat rooms, message boards, aggregators, and the like—were responsible for the content produced by their third-party users. Possible lawyerly analogies abounded. Were these new Internet service providers more like libraries and booksellers, responsible only as “distributors” and liable only for unlawful content they failed to take down? Or were they more like newspapers and magazines, treated as “publishers” and on the hook for defamation just like their authors?

These questions were pressing because of the sheer number of users generating online content. Even if only a small percentage of them do illegal things—posting nonconsensual nude pictures or spreading intentionally malicious lies—a small percentage of millions of users adds up to a lot of liability. That liability would give ISPs and websites pause before letting their users post and publish freely.

Enter Stratton Oakmont. In the mid-1990s, the brokerage house and its then-president, Danny Porush (the basis for Jonah Hill’s Donnie Azoff), sued Prodigy Services Co. over a series of anonymous and allegedly libelous postings on Prodigy’s Money Talk “computer bulletin board.” Among other things, the anonymous comments stated that Porush was a “soon to be proven criminal” and Stratton Oakmont was a “cult of brokers who either lie for a living or get fired.” (Go figure.)

The New York Supreme Court (despite the name, a trial court) decided that Prodigy should be treated as the publisher of the anonymous comments. The court was a little fuzzy on the reasoning but emphasized that Prodigy was a self-styled “family oriented computer network” that had actively edited a portion of the voluminous content it received. The basic idea seemed to be that because Prodigy edited some of its user-generated content, it made itself responsible for all of that content. That meant Prodigy could be on the hook for damages to Stratton Oakmont.

The Internet, circa 1995, freaked out. Before this ruling, it looked like the Prodigies of the world might not be responsible for reviewing content that their users generated. But the Stratton Oakmont decision raised the worrying prospect that a company’s good-faith attempt to filter some content would expose it to crushing liability—not exactly a great incentive to sort through potentially libelous posts.

Congress agreed that was a concern. In 1996, it responded with Section 230 of the Communications Decency Act, which sought to undo the Stratton Oakmont decision by guaranteeing that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (A Senate conference report stated explicitly that the point of this provision was to “overrule Stratton Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content.”) Section 230 is a strong provision, and it’s gotten a lot stronger in the hands of the courts: Judge J. Harvie Wilkinson III of the 4th U.S. Circuit Court of Appeals, for example, has ruled that Section 230 frees ISPs of both publisher and distributor liability—which means that they don’t even have an obligation to take defamatory content down.

To be sure, it’s not clear that Wilkinson’s rule is the right one. In fact, it probably isn’t. In most other contexts, third-party liability serves an important disciplining function: It makes the relevant gatekeepers financially responsible for costs they impose on society. Section 230, as interpreted by the courts, enables a lot of terrible stuff, like revenge porn, leaving victims without much recourse. (A few states have started trying to change that.) But Section 230 also helped enable the unrestricted commenting, sharing, and searching we take for granted on the Internet. Does Google “publish” those billions of potentially libelous search results? Does Yelp “publish” those millions of potentially defamatory reviews? Section 230 says the answer is no, which means that companies can let us post and comment without making a billion precautionary editorial judgments about the content. As Wilkinson has argued, “It would be impossible for service providers to screen each of their millions of postings for possible problems.” Thirty years ago, it was feasible for the New York Times to double-check every word for defamatory content. For Twitter, it’s not.  

It’s conceivable that we could have ended up with a different set of rules governing how websites are responsible for their users’ content—rules that would have made the Internet much more risk-averse about letting its users run wild. But Stratton Oakmont—by gamely attempting to defend its reputation as an upstanding firm—helped produce the rules we now have instead. So the next time the Internet flips out about the moral debauchery of Stratton Oakmont, remember there’s a sense in which we have Stratton Oakmont to thank.