
The EU Digital Services Act and Freedom of Expression: Friends or Foes?
Although the EU Digital Services Act (DSA) was enacted with the stated goal of achieving “greater democratic control,” its global impact on free speech has not been properly assessed.
The DSA, which became fully applicable in February 2024, is a legally binding regulatory framework that gives the European Commission authority to enforce “content moderation” on very large online platforms (those with more than 45 million monthly users) and other digital service providers established in or offering their services in the European Union (EU).
The European Commission claims that the DSA creates “legal certainty,” “greater democratic control,” and “mitigation of systemic risks” such as manipulation or disinformation. It promises to create a safer online space by holding digital platforms—particularly “Very Large Online Platforms” (VLOPs) such as Google, Amazon, Meta, and X—accountable for addressing these risks.
On closer inspection, however, the DSA has significant potential to limit free speech globally.
The Scope of the DSA
The DSA requires platforms to censor “illegal content,” which it broadly defines as anything that is not in compliance with EU law or the law of any Member State (Article 3(h)). This could result in the lowest common denominator for censorship across the whole EU, which would not be in line with international law standards that require any restrictions on speech to be precisely defined and necessary. Additionally, this would open the door to cross-border takedowns, as per the Glawischnig-Piesczek case, or even worldwide takedowns, as per the Google LLC case, both decided by the Court of Justice of the EU.
The DSA also relies on the EU Framework Decision of 28 November 2008, which defines “hate speech” as incitement to violence or hatred against a protected group of persons or a member of such a group. This circular definition of “hate speech” as incitement to hatred is problematic because it fails to specify what “hate” entails.
Due to their vague and subjective nature, “hate speech” laws lead to inconsistent interpretation and enforcement, relying on individual perception rather than clear, objective harm. Furthermore, the lack of a uniform definition at the EU level means that what is considered “illegal” in one country might be legal in another, making the possibility of the lowest common denominator for censorship across Europe very tangible. As the then Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, put it: “the vagueness of hate speech and harassment policies has triggered complaints of inconsistent policy enforcement that penalizes minorities while reinforcing the status of dominant or powerful groups” (A/HRC/38/35).
“Misinformation” and “information manipulation” are even more difficult to define, interpret, and apply. Although the DSA does not use “misinformation” in its main articles, it references it no less than 13 times in the recitals. “Misinformation” is also central, as a key phenomenon to counter, to the crisis response mechanism that Article 36 establishes for extraordinary circumstances posing a serious threat to public security or public health.
The Mechanisms Put Forth by the DSA
Under the DSA, tech platforms must act against “illegal content,” removing or blocking access to such material expeditiously once they become aware of it.
- Content is policed by so-called “trusted flaggers,” including NGOs, private entities, and potentially even law enforcement agencies such as Europol. This deputizes organizations with their own agendas to enforce censorship at scale.
- This network of “flaggers” reports content that it deems “illegal” to the platform, which must prioritize flagged content for removal. If the platform deems the content illegal, it must quickly remove it or disable access to it (for example, by geo-blocking it or reducing its visibility).
- Very large platforms are obliged to proactively prevent “illegal content” by conducting regular risk assessments to identify how their services may spread “illegal content”.
- Very large platforms must undertake proportionate measures to mitigate the systemic risks listed in Article 34, which include “negative effects on civic discourse and electoral processes, and public security” and “effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being”. These measures include adapting their design, terms and conditions, algorithmic systems, advertising, and content moderation, including for “hate speech,” as well as awareness-raising measures.
Penalties for Non-Compliance with the EU Digital Services Act
The penalties for failing to comply with the EU Digital Services Act are severe.
Non-compliant platforms with more than 45 million active users could be fined up to 6% of their global annual turnover. For tech platforms like Google, Amazon, Meta, and X, this means potential fines running into the billions of euros: a platform with an annual worldwide turnover of €100 billion, for instance, would face a maximum fine of €6 billion.
If a platform repeatedly fails to comply with the DSA, the European Commission can impose a temporary or even permanent ban, excluding the platform from the EU market entirely. For platforms that rely heavily on European users, this would mean losing access to one of the world’s largest digital markets.
The risks are high, and tech platforms will scramble to ensure they comply—sometimes at the expense of the fundamental right to free speech.
The DSA and Freedom of Expression
Because the DSA applies to VLOPs and search engines that are accessed within the EU but have a global presence, its effects reach the entire world. Given this global scope, the DSA’s application must be assessed against freedom of expression as enshrined in Article 11 of the Charter of Fundamental Rights of the EU (EU Charter), Article 10 of the European Convention on Human Rights (ECHR), and Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Under these instruments, any limitation on freedom of expression must be provided by law, pursue a legitimate aim, and be necessary and proportionate in a democratic society. Although the DSA explicitly prohibits general monitoring obligations (Article 8), and users may resort to internal complaint and out-of-court dispute resolution mechanisms, its plethora of content flaggers, national coordinators, monitoring reporters, and other authorities could lead to the wide-sweeping removal of online content based either on (a) mistaken qualifications of “illegal content” or (b) the lowest common denominator of “illegal content” in a single EU Member State.
Concerns Expressed in the Political Realm
The January 2025 European Parliament plenary session debate regarding the enforcement of the DSA brought to light significant concerns across the political spectrum about how the DSA may impact freedom of speech and expression.
Several Members of the European Parliament (MEPs) who had initially favoured the legislation raised serious objections to the DSA, with some calling for its revision or annulment. French MEP Virginie Joron, for instance, referred to the DSA as the “Digital Surveillance Act”.
Despite intense opposition, the representatives of the European Commission and the Council of the EU promised to enforce the DSA more rigorously. They vowed to double down on enforcement through more thorough fact-checking and anti “hate speech” measures “so that “hate speech” is flagged and assessed [within] 24 hours and removed when necessary”.
They failed to provide comprehensive responses to the concerns raised about the DSA’s potential to erode fundamental rights, leaving critical questions about its implementation and implications unresolved.
Similar concerns regarding the DSA and free speech were raised in the Parliamentary Assembly of the Council of Europe (PACE) debate on Report 16089 on Regulating content moderation of social media to safeguard freedom of expression, which was adopted on 30 January 2025.
Conclusions
The DSA’s broad definition of “illegal content,” coupled with references to vague concepts such as “misinformation,” “disinformation,” and “hate speech,” is too wide to serve as a proper basis for limiting speech. Allowing “illegal content” to potentially be determined by one country’s vague and overreaching laws pits the DSA against international law standards that require any restrictions on speech to be precisely defined and necessary, and opens the door to worldwide takedowns of content.
By placing excessive pressure on platforms to moderate content, the DSA risks creating an internet governed by fear—fear of fines, fear of bans, and fear of expressing one’s views. If the DSA is allowed to stifle open dialogue and suppress legitimate debate, it will undermine the very democratic principles it claims to protect.
[1] This post is based on the analysis that originally appeared on the ADF International website under the title How the EU Digital Services Act (DSA) Affects Online Free Speech in 2025, https://adfinternational.org/commentary/eu-digital-services-act-one-year
Adina Portaru is Senior Counsel for ADF International in Brussels, where she focuses on freedom of religion or belief and freedom of expression at the European Union and on litigation at the European Court of Human Rights. Prior to joining ADF International, she was a research assistant at Maastricht University in the Netherlands and at the European Training and Research Centre for Human Rights and Democracy in Austria, where she assessed human rights policies. She obtained her doctorate in Law and Religion at Karl Franzens University in Austria.