Free Speech Summer: The U.S. Supreme Court’s Recent Opinions Regarding Online Content Moderation (Part II.)

In the first part of this blog post, we explored the Supreme Court’s landmark decisions in Moody v. NetChoice and NetChoice v. Paxton, which addressed the contentious issue of state regulation of social media platforms and their First Amendment rights. In this second part, we turn to the Supreme Court’s decision in Murthy v. Missouri and consider what these three cases could mean for the future of online speech regulation.

The Supreme Court’s decision in Murthy v. Missouri (No. 23-411) revolves around allegations that federal officials engaged in a coordinated campaign to suppress certain viewpoints on social media platforms during the 2020 election and the COVID-19 pandemic. In that period, platforms such as Facebook, Twitter, and YouTube took extensive measures to manage the content posted on their sites, flagging, removing, or demoting posts deemed to contain false or misleading information. Federal officials, concerned that misinformation could undermine public health initiatives or the integrity of the electoral process, communicated frequently with the social media companies to discuss their content moderation practices and to flag problematic posts.

The plaintiffs, two states (Missouri and Louisiana) and five individual social media users, filed a lawsuit against various Executive Branch officials and agencies. They claimed that these officials had coerced social media platforms into suppressing speech protected under the First Amendment. According to the plaintiffs, this coercion chilled free speech and effectively turned private content moderation into state action. The Fifth Circuit Court of Appeals ruled in the plaintiffs’ favor, holding that the government’s communications with social media companies amounted to coercion, and affirmed a preliminary injunction barring federal officials from continuing to engage in such communications. The court reasoned that the government’s actions had significantly influenced the platforms’ content moderation decisions, thereby infringing on free speech rights.

When the case reached the Supreme Court, however, the focus shifted to standing: the legal principle that determines whether a plaintiff is sufficiently affected by the matter at hand to warrant participation in the case. Justice Barrett, writing for the majority, reversed the Fifth Circuit’s ruling, holding that the plaintiffs lacked standing to seek a preliminary injunction.

The Supreme Court outlined several reasons for this decision. First, the plaintiffs had not demonstrated a substantial risk of future harm directly traceable to the actions of the government defendants: the intense communications between federal officials and the platforms had significantly decreased by 2022, rendering any potential future harm speculative. Second, the plaintiffs argued that they continued to censor themselves because of past restrictions, fearing future government-induced content moderation. The Court rejected this argument, stating that self-inflicted harm based on fears of hypothetical future harm does not establish standing, and cited Clapper v. Amnesty International USA, which held that fear of hypothetical future harm that is not certainly impending cannot create standing. Finally, the plaintiffs proposed a “right to listen” theory, asserting that the First Amendment protects their interest in receiving information and engaging with other speakers on social media. The Court found this theory too broad to establish the concrete and particularized injury necessary for standing, distinguishing Kleindienst v. Mandel, which recognized a First Amendment right to receive information only where the listener has a specific, direct connection to the speaker.

Justice Alito, joined by Justices Thomas and Gorsuch, dissented. The dissent focused on the procedural aspect of standing and the potential harm caused by denying intervention to individuals like Robert F. Kennedy Jr., who claimed direct harm from the government’s actions. Justice Alito argued that allowing intervention would not unduly prejudice the parties and would ensure that individuals affected by the government’s alleged coercion could seek timely redress. The dissent highlighted the potential delay and harm to Kennedy’s presidential campaign if intervention were denied.

The decision emphasizes the need for plaintiffs to demonstrate concrete, particularized injuries directly traceable to government actions to establish standing in First Amendment cases. The ruling underscores the Court’s cautious approach to broad theories of standing that could potentially open the floodgates to numerous lawsuits against government actions perceived as influencing private entities. This has noteworthy implications for future cases involving government communications with private entities, particularly in the realm of content moderation on social media. It sets a precedent that mere government communication, without direct coercion or substantial risk of future harm, is insufficient to establish standing for First Amendment claims.

The plaintiffs’ argument touched on a grey area where government communication and private content moderation intersect. If government officials are perceived as coercing or heavily influencing content moderation decisions, it raises questions about the independence of these platforms and the protections afforded by Section 230. The Supreme Court’s decision highlights the delicate balance between allowing government entities to communicate with social media companies on matters of public concern, such as misinformation, and preventing undue government influence that could violate free speech principles. This could set a precedent for how courts may handle future cases involving government interaction with private social media companies. By denying standing to the plaintiffs, the Court has signaled that speculative harms or generalized grievances are insufficient grounds for First Amendment claims. This decision will likely influence how lower courts evaluate similar cases, reinforcing the need for plaintiffs to show specific, direct injuries resulting from government actions. Moreover, the decision underscores the importance of maintaining a clear distinction between government action and private moderation. As social media platforms continue to play a central role in public discourse, ensuring that these platforms can operate independently of government coercion while still engaging with public concerns remains a critical challenge.

The Supreme Court’s recent decisions in these cases collectively mark a significant juncture in the intersection of state regulation, digital platforms, and First Amendment rights. These cases illuminate the judiciary’s careful and nuanced approach to balancing technological innovation with constitutional protections.

In Moody v. NetChoice, the Court highlighted the potential infringement on editorial discretion, a core First Amendment right, by state laws compelling platforms to host specific speech. Similarly, the Court’s refusal to grant certiorari in John Doe v. Snap, Inc. underscores the robust protections afforded to social media platforms under Section 230 of the Communications Decency Act. Despite dissenting opinions advocating for a reassessment of Section 230’s scope in light of modern digital interactions, the Court upheld the existing legal framework. This decision reflects the Court’s cautious approach to altering established legal protections, even amid growing concerns about platform accountability and user safety. Murthy v. Missouri, finally, illustrates the Court’s insistence on concrete, particularized injuries to establish standing in First Amendment cases: the plaintiffs lacked standing because their alleged harms were speculative or self-inflicted, underscoring the requirement of tangible, direct injuries in claims involving government communication and private content moderation.

These decisions collectively reflect the Supreme Court’s nuanced understanding of the complexities involved in regulating digital platforms. The rulings emphasize the importance of protecting editorial discretion while recognizing the need for precise, context-specific judicial review. The Court’s cautious approach to standing and its reluctance to broadly reinterpret established legal frameworks like Section 230 indicate a preference for incremental changes over sweeping reforms.

The ongoing legal battles and the detailed scrutiny mandated by the Supreme Court will continue to shape the landscape of internet regulation and First Amendment rights. As these cases return to the lower courts, they will influence how courts and lawmakers navigate the delicate balance between state regulatory interests and the constitutional protections afforded to private companies in the digital age. This careful balancing act will be crucial in ensuring that social media platforms can operate independently of undue government influence while addressing public concerns about fairness and accountability in digital communication.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.