CDA Section 230: The Bastion—and Achilles heel—of free speech

Social media, online user interaction platforms, and even product reviews in online shops would not exist as we know them today if website operators were held absolutely responsible for every word others wrote on their sites. At the same time, the possibilities, necessity, and limits of moderating content on the Internet are the subject of heated debate. That is why in this post, relying mainly on US law, we will go over the events and decisions that led to the situation in which, today, “anyone can express their opinion freely on the Internet.”

It all started in 1995, with Stratton Oakmont v. Prodigy Services. The first party to the lawsuit, the plaintiff, was Stratton Oakmont, a Long Island-based brokerage firm founded by Jordan Belfort in 1989. The company traded primarily in low-cost over-the-counter “penny stocks” and by the 1990s was the largest over-the-counter stock brokerage firm in the US. The story of the series of frauds committed within the firm became widely known at the latest with the 2013 film “The Wolf of Wall Street”.

The second player, Prodigy Services, was an early Internet content provider. Its main business was providing information services to its subscribers, such as “bulletin boards” where third parties could exchange information (much like internet forums in today’s sense). The company advertised itself as an online service provider that exercised editorial control over the online services it operated. Geoffrey Moore, Prodigy’s Director of Market Programmes and Communications, repeatedly told the press that this made the company more akin to a print newspaper than to an uncontrolled online platform.

One of the company’s popular forums was “Money Talk”, where participants discussed financial topics with each other and posts were overseen by what we would today call moderators. Then, in 1994, a post appeared on the forum accusing Stratton Oakmont of criminal activity and fraud. Stratton Oakmont sued Prodigy for alleged defamation. The key issue in the lawsuit was whether Prodigy was a publisher or merely a distributor of the information posted. The question was relevant because US case law has traditionally drawn a sharp distinction between the responsibilities of these two roles.

According to this distinction, a publisher can be “expected to be aware of the content, nature, genuineness or veracity of the material it publishes and should therefore be held liable for any unlawful content it publishes.” By contrast, a distributor is “unlikely to be aware of the content, nature, veracity or truthfulness of material that it makes available on its platform” and may therefore be exempt from liability.

In Zeran v. America Online, later cited in Doe v. America Online, the United States Court of Appeals for the Fourth Circuit held that the essential element of a defamation action is “publication”, meaning that only the party who publishes can be held liable. Publication covers not only the original communication of information but also the negligent communication of a defamatory statement and the failure to remove a statement first communicated by another.

For Prodigy, the decision was ultimately unfavorable: because of its public statements and moderation policies, the court treated the company as a publisher, and therefore as liable for the content published on its sites. This stood partly in contrast to an earlier decision from 1991 (Cubby v. CompuServe), in which the court recognized CompuServe, as the operator of a similar service, as a mere distributor.

The case gave rise to the view that the Stratton Oakmont decision was in fact a step backwards, since it effectively encouraged internet platform operators not to moderate any content at all, thereby escaping liability under US law. It was the resolution of this situation that was later embodied in the now famous, or infamous, “Section 230”.

The Communications Decency Act (CDA), passed as part of the Telecommunications Act of 1996, provides platforms with legal protection from liability for user content, while also allowing them to moderate content. Section 230 of the CDA states that online service providers are not liable for content posted by users and provides them with immunity for their moderation decisions, provided they act in good faith in removing content deemed harmful.

More specifically, Section 230(c)(1) reads as follows:

  • “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The other side of the coin follows immediately afterwards, in paragraph (2):

  • “No provider or user of an interactive computer service shall be held liable on account of-

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”

As the court in Doe v. GTE Corp. explained, the difference between section 230(c)(1) and (2) is that the former “prevents civil liability where web hosting sites and other Internet service providers (ISPs) refrain from filtering or censoring information on their sites”, while the latter ensures that the “provider that filters offensive material is not liable to the censored customer.”

Another difference is that the protection of section 230(c)(2) applies only to actions taken “in good faith.” No such “good faith requirement” exists in the broader provision of section 230(c)(1). The courts have explained that the requirement is intended to protect online services that remove objectionable content, or even mistakenly remove unobjectionable content, while denying protection to those that remove content for anticompetitive or other malicious reasons.

For these reasons, Section 230 has become both the bastion of free speech on the internet and its Achilles’ heel. By shielding online services from liability for third-party content, the provision opened the way for a variety of now widespread business models that rely on user-generated content, transforming the online economy. In essence, it is what makes possible the moderation practices of knowledge-sharing sites such as Wikipedia, and what allows even small online shop operators to publish customer reviews of the products they sell.


István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.

Print Friendly, PDF & Email