
János Tamás PAPP: Regulating Online Platforms in the USA: How Section 230 Became a Seemingly Insurmountable Obstacle (Part I.)

In the age of digital interconnectedness, the power and influence of social media platforms have become undeniable. These platforms, initially conceived as networks for friends and family to connect, have transformed into global public squares where news is disseminated, opinions are forged, and movements are born. With this massive influence has come a rising call in the United States for regulatory measures on social media. But how does a nation built on the principles of free speech and open discourse strike a balance between oversight and freedom? This is a particularly interesting question in a country whose Constitution has guaranteed freedom of speech to all citizens since 1791, the First Amendment stating that "Congress shall make no law (…) abridging the freedom of speech or of the press". Historically, the U.S. approach to media has been one of minimal intervention, but the digital age, with its unique challenges, has nudged the country to rethink its stance.

In the early 1990s, the Internet became more widespread, with the emergence of sites offering forums, message boards, and other services built on user-generated content. While this helped promote the use of the Internet, it also produced a number of situations in which courts had to decide whether service providers could be held liable for user-generated content. Under the US tort of defamation, distributors of printed works are not liable for the books they sell unless they have clear knowledge of the infringing content, and they are under no obligation to screen the books they sell for such content. When courts began to address the liability of internet intermediaries for infringing content posted on their sites in the 1990s, most decisions relied on this analogy, but there were also contrary rulings that found an intermediary liable if it had moderated its site and failed to remove the infringing content, regardless of whether it was aware of it. The result was a strange situation: a provider that carried out no moderation at all was not liable for content posted there by third parties, whereas a provider that sought to proactively moderate infringing content but failed to remove some of it could be held liable.

Although the rules on the scope of freedom of expression and the strength of its protection apply to "speech" on the internet, the specific nature of internet communication has raised a number of new questions and reopened questions that had seemingly already been answered. The first truly significant regulation on the liability of digital platforms for user-generated content is Section 230 of the Communications Decency Act (CDA 230), which is part of the US Telecommunications Act of 1996. This section provided Internet operators, practically from the moment of their emergence, with a degree of immunity that allowed the digital economy in the US to flourish and ushered in a new era of Internet communication and thus of freedom of speech. Since the adoption of this section, however, the way digital platforms operate has changed significantly and poses new challenges to the regulatory environment.

Enacted in 1996, Section 230 was designed with the intent of fostering a nascent internet. The statute provides immunity to "interactive computer services" from being treated as the publisher or speaker of any information provided by another information content provider. In simpler terms, platforms like Facebook or Twitter cannot be held liable for most of the content that their users post. A primary purpose of Section 230 is to allow interactive computer service providers to restrict the display of sexual or violent content without being held liable. The statute's statement of policy also refers to this purpose, declaring that Congress wants to remove "disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material". The circumstances surrounding the passage of the law likewise suggest that Congress did not intend to provide complete immunity, but merely to resolve a specific controversy. In eliminating the earlier Catch-22, under which only those providers that voluntarily moderated content could be held liable, Congress provided that no liability may arise from a provider's good-faith removal of content; the goal was to encourage moderation, not to establish complete immunity. In enacting the law, Congress was guided by the principle of promoting the continued and vibrant development of interactive computer services with as little government regulation as possible. According to the "Findings" section of the Act, interactive computer services "represent an extraordinary advance in the availability of educational and informational resources" to citizens of the United States, and "offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity". The rationale for introducing the legislation was also to promote the growth of political discourse and services on the Internet.

Section 230 of the CDA is considered by many scholars to be the legal foundation of today's Internet, one of the most important pieces of legislation protecting online speech, and a decisive rule that shielded the Internet's soaring growth. The legislation was born at the dawn of Internet exceptionalism, proclaiming as a flagship of the new approach that "the Internet is different." The Internet built on these twenty-six words, as Kosseff put it, had the potential to democratize communication itself, giving individuals the freedom to exchange ideas directly with one another in online forums and the ability to create an unprecedented economic boom, a vibrant and competitive free market in the United States of America. Faith in the proper functioning of this special regulation protecting digital platforms has begun to falter, however, because the role of digital platforms has now grown beyond the basic services they originally provided, and critics argue that Section 230 is not capable of addressing some of the harms that can arise from a platform-based economy.

Section 230 gives platform operators immunity both for content uploaded by users and for their own moderation activities. The platform operator therefore cannot be held liable for infringing content uploaded by users, nor can it be held liable for removing any content. Since US courts have consistently held in a number of decisions that social networking sites are service providers under Section 230, social media platforms have the right to moderate ("censor") the content on their services in any way they wish in order to maintain the environment they want. In effect, they can remove any content or leave any content untouched, in the knowledge that they bear almost no liability for infringing content.

The regulation does not discriminate on the basis of platform size: immunity applies to small blogs and to giant platforms with billions of users alike. Of course, the challenges that the emergence of social networking sites would pose to Section 230 could not have been foreseen when the legislation was drafted. As social networking sites are where most internet users get most of their information, these platforms have become an almost inescapable community space, with rules, policies, and ways of operating that exert a huge influence on both society and individual users. One consequence of the broad immunity granted to social networking sites is a phenomenon known as "collateral speech restriction": it is much easier, and cheaper, for a platform to use its Section 230 immunity to remove all risky posts than to engage in serious public relations and communications battles to ensure that the First Amendment is enforced unconditionally. Under Section 230, platforms are not legally liable for offensive content posted on their services, or even for terrorist propaganda. In Fields v. Twitter, a federal court in California ruled that platform providers cannot be held liable even if terrorists use their sites to spread propaganda.

In short, while Section 230 shields online platforms from liability for most third-party content, it also allows them the discretion to moderate and remove content without facing legal repercussions. This double-edged nature of Section 230 means that, on one hand, platforms can foster diverse online discourse, but on the other, they can also unilaterally decide what content is permissible. As concerns about misinformation, online extremism, and tech monopolies grow, Section 230 finds itself at the crossroads of discussions about the future of internet regulation and the balance between fostering innovation and ensuring accountability.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled "Regulation of Social Media Platforms in Protection of Democratic Discourses".
