
János Tamás PAPP: Regulating online platforms in the USA: How Section 230 became a seemingly insurmountable obstacle (Part II.)

As we discussed in our previous post, at its inception, Section 230 was seen as a boon for the internet. It protected burgeoning platforms from a potential onslaught of litigation. Without such protections, these platforms might have been wary of allowing user-generated content, fearing lawsuits at every turn. Given the volume of posts, comments, and shares, it would have been an insurmountable task for platforms to vet every piece of content for potential liability. Thus, Section 230 provided the shield necessary for these platforms to grow and for the internet to flourish as a space for open discourse. However, the very protections that spurred the growth of these platforms have now become a double-edged sword. As these platforms have evolved into influential giants, so too have the complexities of the content they host. Misinformation, hate speech, and divisive or incendiary content have become commonplace. The once-celebrated virtual town squares now carry the potential to distort public perceptions, fuel societal divisions, and even sway elections.

Given these challenges, the call for regulation is understandable. However, the U.S. government’s hands are tied, to a large extent, by Section 230. Any attempt to hold platforms accountable for user-generated content runs into the protective wall of this statute. For instance, if false information propagated on a platform leads to real-world harm, the platform remains shielded from liability under Section 230. This makes it difficult to incentivize platforms to manage and moderate content proactively. Every move towards oversight must also be measured against the right to freedom of speech: there is a fine line between curbing harmful content and stifling genuine discourse. Additionally, the global nature of these platforms means that regulations in the U.S. might have implications worldwide, or alternatively, that content originating abroad can affect U.S. users, complicating questions of jurisdiction.

Moreover, Section 230 blurs the line between a platform and a publisher. Traditional media entities, like newspapers or television networks, are held to strict standards of accuracy and can be liable for spreading false information. In contrast, social media platforms, while influencing public opinion just as potently, if not more so, escape these responsibilities. They enjoy the vast reach and influence of publishers without the accompanying accountability. The dichotomy of Section 230 becomes even starker when one considers the algorithmic nature of these platforms. While they might not create content, they undoubtedly influence its reach. Algorithms decide which content is highlighted in user feeds, potentially amplifying some voices while muting others. This curatorial role is akin to editorial decision-making in traditional media, yet the platforms remain absolved of the responsibilities that accompany such power.

Because of Section 230’s protection, social media companies have been largely free to develop their own content moderation policies without fear of legal repercussions. Whether these platforms decide to remove content or leave it up, Section 230 protects their decisions either way. This autonomy has frustrated regulatory attempts that aim to hold platforms accountable for user-generated content or misinformation. Furthermore, any government-led effort to mandate specific moderation practices could run into First Amendment challenges. Section 230 allows platforms to navigate the tension between hosting open forums and moderating content without becoming entangled in constant legal battles.

A recent decision by a federal appeals court has eased some restrictions on the Biden administration’s interactions with social media companies. The court determined that the White House, the FBI, and top health officials cannot coerce or significantly push social media companies to remove content the administration deems misinformation, particularly content related to COVID-19. Nevertheless, the ruling narrowed an injunction by a Louisiana judge that had previously barred the administration from virtually any communication with social media firms. The injunction remains in place for the White House, the FBI, the CDC, and the surgeon general, but no longer applies to other federal officials. The court gave the administration a period of 10 days to seek review by the U.S. Supreme Court. The case originated from two lawsuits, one brought by a group of doctors and another by a conservative nonprofit organization, both accusing the administration of infringing their free speech rights by pressuring social media platforms to censor their content.

Addressing the challenges posed by Section 230 is not straightforward. Repealing it entirely could stifle free speech, as platforms, fearing litigation, might opt for excessive censorship. On the other hand, letting it stand in its current form allows platforms to sidestep broader societal responsibilities. There is also a concern about the potential impact on smaller platforms and startups, which might lack the resources for extensive content moderation; without the protections of Section 230, they could be exposed to debilitating lawsuits. Any regulatory measure that would place more responsibility on platforms for user content therefore has to grapple with the broad immunity the statute grants. This is not to say that social media platforms cannot be regulated at all, but Section 230 does present a significant hurdle for legislators and policymakers seeking greater accountability from these companies for the vast amount of content circulating on their platforms.

Section 230, while foundational in shaping the internet we know today, has become a significant roadblock in the path of meaningful regulation of social media platforms. As society grapples with the influence and impact of these platforms, a nuanced reconsideration of Section 230 is imperative. Striking a balance will be complex but essential to ensure that the digital spaces remain open for expression while being safeguarded against their potential detrimental impacts. It’s a testament to the evolving nature of technology and society, where laws once seen as catalysts can become impediments, necessitating reflection and reform.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.
