In October 1994, an unidentified user of a bulletin board hosted by the online service provider Prodigy posted an item that was to have far-reaching consequences. The post claimed that a Long Island brokerage firm called Stratton Oakmont had committed criminal and fraudulent acts in connection with the initial public offering (IPO) of another company.
Stratton Oakmont sued Prodigy and the unidentified poster for defamation – and won. Prodigy argued that it couldn’t be held responsible for what anonymous users posted on its platform. The judge disagreed, ruling that because Prodigy exercised editorial control over the messages on its bulletin boards in several ways, it was acting as the publisher of its users’ content – and was therefore potentially liable for any defamatory material they posted.
The case alarmed an Oregon congressman (now a US senator), Ron Wyden, who accurately perceived it as a mortal threat to the growth of the internet. It would mean that every online hosting service would need to have lawyers vetting everything on its site, thereby slowing exploitation of the technology to a crawl. So with another congressman, Chris Cox, he inserted a short clause – Section 230 – into the Communications Decency Act, which was then incorporated into the sprawling 1996 Telecommunications Act. The section itself is short (about a thousand words) but its core is a single sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That sentence laid the basis for everything that has followed. It constitutes, as the title of a recent book puts it, The Twenty-Six Words That Created the Internet. What it does is create a “liability shield” for online platforms. It does not prevent them from moderating content, if that’s what they choose to do. But what it has done is enable these platforms to scale up at an exponential rate. Thus users can upload videos to YouTube, post reviews on Amazon and TripAdvisor or classified ads on Craigslist; and Facebook and Twitter can offer social networking to millions – all without the platforms incurring legal liability for what those users do. Section 230 is thus the get-out-of-jail-free card behind countless internet fortunes. And if it were repealed tomorrow, many of those platforms would shrivel.
Not surprisingly, given the manifold abuses of surveillance capitalism and the societal damage that the tech giants are causing, people are beginning to wonder whether Section 230 ought to be revised. The answer is yes, but it’s a bit like the question of whether we should give up burning fossil fuels tomorrow: in both cases, too many vested interests are entangled for us to extricate ourselves before catastrophe strikes.
Although Section 230’s purpose was clear, its language was not. Since it did not make the liability protection conditional on responsible content-moderation practices, lawyers have persuaded US courts to interpret the statute as providing a sweeping immunity far beyond what its authors intended or imagined. As a result, says Danielle Keats Citron, a noted legal scholar, courts have massively overextended Section 230’s legal shield; platforms have been protected from liability no matter how irresponsible their conduct and no matter how grave the harm inflicted. The statute, she writes, “has been extended to protect sites whose business is revenge porn, whose operators choose to post defamation, and whose role is getting a cut of illegal gun sales”.
The prospects for revision of Section 230 in the US seem remote. Although lawmakers’ hostility to the tech giants has greatly increased, their animus is too incoherent to be effective. Conservatives think that Section 230 allows the companies to escape responsibility for discriminating against conservative political viewpoints. The Electronic Frontier Foundation, on the other hand, sees it as “one of the most valuable tools for protecting freedom of expression and innovation on the internet”. And so it goes on: in the land of the first amendment, every dog has his day – or at least his say.
With the 20/20 vision of hindsight, Senator Wyden’s clause might be regarded as a sensible statute for the mid-1990s, when the web was just getting going and before companies like Google and Facebook existed. But that was a long time ago, and things have changed out of all recognition since then. Because the clause’s reach has been inexorably extended by ingenious lawyers, permissive judges, inattentive legislators and determined tech-company lobbying, it has become a tool for undermining democracy. It does this by enabling tech companies – especially social media platforms – to behave with impunity, since they bear no legal liability for the consequences of the activities that their platforms (and their business models) enable. It effectively says that some powerful agents in society can escape responsibility for the harm that they do. Impunity on this scale is incompatible with a functioning democracy.