
The Unintended Consequences of European Content Removal Laws on Free Expression

Internet governance is being reshaped by stringent content regulation laws, particularly in Europe. These laws, aimed at curbing speech that does not align with European standards, have significant consequences for global platforms. A recent report by The Future of Free Speech highlights how they lead to the removal of vast amounts of legally permissible content, particularly in France, Germany, and Sweden.

Over the past decade, several European countries have implemented laws targeting online content ranging from terrorist propaganda to hate speech. American tech companies operating in these markets must comply or face hefty fines and potential market exclusion. However, these regulations often result in the unintended removal of legal content, underscoring the complexity and overreach of such policies. The report from The Future of Free Speech, a non-partisan think tank at Vanderbilt University, documents the high incidence of legal content being removed from platforms such as Facebook and YouTube. The phenomenon illustrates the law of unintended consequences: legislative actions often yield outcomes contrary to their intentions. When communication services are forced to respond rapidly to vaguely defined content laws, they tend to over-correct, removing a great deal of legally permissible content. The analysis shows that the overwhelming majority of removed content, between 87.5% and 99.7% depending on the country and platform, was legally permissible.

Germany, with its strict content removal laws such as the Network Enforcement Act (NetzDG), sees the highest percentage of legally permissible content being removed. For instance, 99.7% and 98.9% of deleted comments on Facebook and YouTube, respectively, were legal. This over-removal is a protective measure by platforms to avoid severe penalties, leading to significant collateral damage to free expression. Sweden and France also exhibit high rates of legally permissible content being deleted, though to a slightly lesser extent. In Sweden, 94.6% of removed comments on both Facebook and YouTube were found to be legal, while France saw 92.1% and 87.5% on Facebook and YouTube, respectively. These figures, derived from a substantial sample of nearly 1.3 million comments, highlight the pervasive problem of over-censorship driven by regulatory pressure. The majority of removed content consisted of general expressions of opinion, that is, statements that neither contained hate speech nor violated any legal standard. This suggests that algorithms and moderation teams, in an effort to comply with complex legal frameworks, are overly cautious, thereby stifling legitimate discourse. Comments expressing support for controversial candidates or discussing sensitive topics in abstract terms were often removed despite being legally permissible.

YouTube experienced the highest deletion rates, with Germany seeing 11.46% of comments removed, France 7.23%, and Sweden 4.07%. On Facebook, the corresponding figures were substantially lower: 0.58% in Germany, 1.19% in France, and 0.46% in Sweden. A significant portion of these deleted comments—more than 56%—were categorized as “general expressions of opinion.” The report indicates that less than 12.5% of the deleted comments were illegal, suggesting that over-removal of legal content is a more significant problem than under-removal of illegal content. Moreover, transparency in content moderation is notably lacking. Only 25% of the pages and channels examined publicly disclosed their specific content moderation practices. This lack of transparency can generate uncertainty among users, who may not know whether specific content rules apply beyond the general policies of platforms like Facebook and YouTube. The opaque nature of these practices underscores the need for clearer, more transparent guidelines.

This scenario raises critical questions about the effectiveness and fairness of these content regulation policies. Are these laws achieving their intended goals, or are they simply creating a more restricted and less vibrant online environment? The evidence suggests the latter. The geopolitical landscape and national security concerns further complicate this issue, with governments using blunt legislative tools to combat misinformation and foreign interference, inadvertently encouraging platforms to over-moderate content. This report points to a troubling trend: as the political and social climate evolves, the standards for content moderation are constantly shifting. However, the penalties for non-compliance remain rigid, leading to a perpetually cautious approach by tech companies. This dynamic almost ensures that the issue of over-removal of legal content will persist and possibly worsen.

While some European governments claim to understand the delicate balance between freedom of expression and security, their actions often suggest otherwise. The report makes clear, once again, that content moderation at scale is an inherently flawed process. Despite this, regulators frequently dismiss these challenges as mere excuses rather than legitimate concerns about the feasibility of meeting numerous and often conflicting legal requirements. These findings should prompt a reconsideration of current content regulation strategies. Instead of rigidly enforcing laws that cause significant collateral damage, regulators should focus on developing more nuanced approaches that better balance the need for security with the preservation of free expression. This might involve clearer guidelines for content moderation, more robust appeals processes for removed content, and greater transparency in how moderation decisions are made.

The persistence of the law of unintended consequences highlights the need for legislators to anticipate the broader impacts of their policies. As long as governments continue to assume their regulations are workable despite practical evidence to the contrary, the internet risks becoming a less open and free space. The belief that any legislative decree is inherently achievable must be tempered by a realistic understanding of the complexities of global content moderation. The unintended consequences of European content removal laws are suppressing legal speech and narrowing online discourse. The assumption of a digital Wild West and the urgency of addressing harmful online content have driven intense public pressure, but that assumption is not borne out by the empirical reality. The report's findings suggest that the current system undermines freedom of expression, with high levels of legal content being removed, and they raise serious concerns about the role of private social media giants in steering digital discourse and about the potential spillover effects of such stringent regulations. As these laws continue to evolve, policymakers must weigh their broader implications and strive for a more nuanced approach that safeguards both security and freedom of expression. The future of free speech on the internet depends on finding this equilibrium.

János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and is a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has published widely on social media and the law, including a book titled "Regulation of Social Media Platforms in Protection of Democratic Discourses."
