
Content Moderation on Social Media Platforms in the EU
In 2022, the EU adopted the Digital Services Act (DSA), embracing a comprehensive approach to content moderation on social media platforms that supplements the existing self-regulatory mechanisms with legal safeguards. The article highlights the key characteristics and tools of this regulatory approach, which aims to ensure the transparency and accountability of digital platforms toward regulators and users while addressing illegal and harmful content, including misinformation, disinformation, and propaganda.
In the 2020s, the debate over content moderation on social media platforms has become central to discussions on democracy and human rights in the digital age. It is driven by the increasing role of these platforms in shaping public opinion and influencing democratic processes, and by the complex challenge of balancing freedom of expression with the regulation of harmful and illegal content spread on social media. Striking a balance between individuals’ right to global access to information and free expression, the economic interests of the private companies controlling social media platforms, and the state’s duty to ensure security presents a significant challenge for contemporary society. To address these concerns, the European Union (EU) has adopted a legislative framework aimed at safeguarding information integrity and protecting fundamental rights in the digital environment, central to which is the Digital Services Act (DSA).
In 2022, the EU adopted the DSA, establishing an updated legal framework for content moderation on online platforms to counter the spread of illegal and harmful content, including disinformation, misinformation, and propaganda. At the same time, the previously adopted self-regulatory mechanisms, the EU Code of Practice on Disinformation and the EU Code of Conduct on Countering Illegal Hate Speech Online, continued to apply. In early 2025, the European Commission and the European Board for Digital Services formally endorsed the integration of these documents as Codes of Conduct within the framework of the DSA, facilitating enhanced oversight by EU institutions over their implementation. At present, the DSA constitutes a key element of the EU’s comprehensive strategy to ensure information integrity in the digital environment, articulated under the European Democracy Shield, which also includes other legislative acts such as the Artificial Intelligence Act, the European Media Freedom Act, and the Regulation on Transparency and Targeting of Political Advertising.
Although the full and effective implementation of the DSA was scheduled to begin on February 17, 2024, it faces several challenges. As an EU regulation, the DSA is directly applicable across all Member States without requiring transposition into national law. Nevertheless, its practical enforcement, compliance frameworks, and institutional oversight remain largely the responsibility of national authorities, and this work has been delayed in some Member States. As a result, changes in content moderation procedures are not yet clearly noticeable. In addition, the EU regulatory approach has been criticized for a degree of complexity that borders on overregulation, raising concerns about the potential to censor minority viewpoints and to restrict innovation in the sector. In this context, it is important to highlight some of the key characteristics and tools of the regulatory approach defined in the DSA that will affect content moderation procedures on social media.
With the adoption of the DSA, the EU embraces a comprehensive approach to create a so-called “co-regulatory backstop” that supplements self-regulatory mechanisms with legal safeguards, ensuring the transparency and accountability of digital platforms toward regulators and users while avoiding content censorship. Further, the DSA provides for soft law instruments aimed at specifying the general rules of the regulation and thereby facilitating its implementation by its addressees. For example, on the basis of this competence and ahead of the EU elections in June 2024, the European Commission issued guidelines for providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) on mitigating systemic risks to electoral processes. Following the elections, the Commission published a best-practice election toolkit under the DSA in February 2025, providing practical guidance for Digital Services Coordinators on their engagement with VLOPs and VLOSEs to tackle risks such as hate speech, online harassment, and the manipulation of public opinion, including threats arising from AI-generated content and identity misrepresentation.
The EU approach requires each Member State to designate a national authority as Digital Services Coordinator (DSC), responsible for overseeing the implementation of the DSA at the national level. The designated authority is required to operate in an impartial, transparent, and timely manner and must be able to make decisions independently of other public institutions or private organizations. Following the appointment of the national coordinators, platform users are able to submit complaints directly to them against social media platforms that violate their obligations under the DSA. The regulation specifies that the European Commission directly supervises social media platforms designated as VLOPs on the basis of having an average of at least 45 million monthly active users within the European Union. In April 2023, the European Commission designated the following social media platforms as VLOPs: Facebook, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter (X), and YouTube. It should be noted that for these platforms the application of the DSA began before the date scheduled for the effective implementation of the entire regulation.
The DSA upholds the principle, established previously in US and EU regulatory frameworks, that social media platforms are not legally liable for illegal content shared by their users if they were unaware of it or removed it expeditiously upon becoming aware of it. Under the DSA, illegal content is any content that is contrary to EU law or the law of a Member State. The DSA does not impose a general obligation on social media platforms to monitor or search for illegal content in the information shared by their users. Instead, the regulation requires online platforms to implement notice-and-action mechanisms that allow users to notify them of specific information on their service which the notifier considers illegal. These systems must be easily accessible and allow electronic submission of notifications. The DSA sets standards for processing notifications, stipulating that social media platforms must act promptly, in good faith, impartially, and objectively. Platforms are required to inform the notifier of the decision made, whether automated means were used to make it, and the available legal remedies. Additionally, social media platforms must provide each affected service recipient with a clear and specific explanation of the reasons for any restriction imposed on content considered illegal or in violation of their terms of service. This includes measures such as limiting access to content, temporary blocking, or account termination. The explanation must be easy to understand and as accurate and specific as possible in light of the particular situation.
Further, social media platforms must implement an easily accessible internal system for reviewing complaints about content moderation decisions. Platforms must ensure that such decisions are made under the supervision of appropriately qualified staff, rather than relying solely on automated tools. In addition to judicial avenues for challenging platform decisions, the DSA provides for the establishment of out-of-court dispute settlement bodies certified by the Digital Services Coordinators. These bodies may be created or supported by the state. When a social media user files a complaint with such a body, the platform cannot refuse to participate in the procedure for resolving it.
Through the DSA, the EU seeks to establish standards for procedures implemented by private online platforms, ensuring that they align with principles of legality comparable to those applied to public regulation. The aim is to address the inherent shortcomings of private regulation by setting standards for transparency and procedural fairness in decision-making processes conducted by private organizations. The DSA rules apply not only to illegal content but also to harmful content, including disinformation, misinformation, and propaganda. Platform policies must comply with the EU Code of Practice on Disinformation and the EU Code of Conduct on Countering Illegal Hate Speech Online, and the procedures outlined in the DSA therefore also apply to such content.
In September 2023 the European Commission launched the DSA Transparency Database, in which online platforms are required to report their content moderation decisions and the reasons behind them. To date, the database has recorded more than 10 billion statements of reasons, nearly 50 percent of which were processed through automated means. The main purpose of the database is to create transparency around content moderation decisions, although how the collected information will be handled and analyzed is still under discussion.
The DSA grants national Digital Services Coordinators the authority to award organizations the status of trusted flaggers. The names of these organizations are published by the European Commission in a publicly accessible database, and their notifications must be given priority by social media platforms. Online platforms may contest the status of these organizations before the national Digital Services Coordinator if they believe that inaccurate notifications have been submitted. Furthermore, the EU Code of Practice on Disinformation includes, among its measures to combat disinformation, the collaboration of participating social media platforms with fact-checking organizations. Social media platforms designated as VLOPs are also required to conduct periodic risk assessments specific to their services and proportionate to the systemic risks related to the spread of illegal content, as well as to actual or foreseeable negative effects on fundamental rights, civic discourse, electoral processes, and public security.
The EU regulatory approach aims to provide stronger guarantees for users, ensuring that content moderation procedures are transparent and well reasoned and that users’ perspectives are considered in decision-making. At the same time, there is a noticeable strengthening of the role of states, acting through competent public institutions, in overseeing the operations of social media platforms, including the content moderation process. This approach aligns with Europe’s political and legal tradition but may be harder to accept in a more libertarian context, such as that of the United States.
In Europe, Facebook is the social media platform with the largest market share, accounting for nearly 80% of it. Meta, Facebook’s parent company, has developed and implemented policies and community standards that apply globally to its 3 billion users. When a notification regarding illegal content is received, Meta assesses whether the content violates its standards. If it does not, the illegal content is removed only within the jurisdiction where it has been classified as such and remains accessible on the rest of the platform. If the content violates the platform’s standards, it is removed globally. In determining whether content complies with or violates its standards, the company has committed to following the decisions of the Meta Oversight Board. In resolving content moderation disputes, the Board takes into account the specific national context in which the content has emerged, with a view to objectively assessing the impact of its dissemination. It then applies the UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, the International Covenant on Civil and Political Rights (ICCPR), and other relevant UN instruments. In this way, the Board’s work contributes to the establishment of global policies and standards for social media.
It is still too early to assess the actual impact of the DSA on social media content: whether it will succeed in countering illegal content and limiting the spread of harmful content without infringing on freedom of expression, particularly for those who express less popular views that differ from dominant political, economic, and social narratives. It nevertheless appears that the adoption of the DSA provides a stronger basis for resolving content moderation disputes involving users in Europe, taking into account the European Convention on Human Rights as well as the EU Charter of Fundamental Rights.
Denitza Toptchiyska is Associate Professor of Law at the New Bulgarian University, where she teaches courses on General Theory of Law and Information Society Law. She was a Fulbright visiting research scholar at the University of Notre Dame, USA, and a visiting scholar at the University Paris II Panthéon-Assas, France, working on her comparative research project “Law, Governance and New Technologies: US and EU Regulatory Approaches.” In 2023 and 2024, she was a visiting professor at the University of Lyon 2, France, teaching courses on Human Rights and Digital Law. Since 2022 she has been a research fellow at the ISLC – Information Society Law Center of the University of Milan, Italy. She is a member of GIGANET (the Global Internet Governance Academic Network).
Other articles by Denitza Toptchiyska
Toptchiyska, D. (2023) Legal Aspects of Content Moderation on Social Media Platforms: a Comparative Perspective: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4901501
Toptchiyska, D. (2024) Navigating The GDPR-DSA Nexus: Regulating Personal Data In Social Media and Search Engines: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4948546