
The Rise of Digital Platforms’ Power and the EU’s Regulatory Gamble with the DMA and DSA
The European Union (EU) is staking its claim as the world’s digital rulemaker. With the Digital Markets Act (DMA) and the Digital Services Act (DSA), the EU has moved beyond sectoral reform and adopted a preventive, value-based model of governance for the digital sphere. However, as enforcement unfolds, deeper questions are emerging: Can the EU’s regulatory vision harmonize fragmented laws, safeguard users’ rights, and foster competition without sacrificing innovation? And what lessons, if any, does this model offer for the global digital order?
The Brussels “Template” Effect in Action
The Digital Services Act Package – comprising the DMA and the DSA – marks another decisive turn in the EU’s approach to digital regulation. Rather than relying solely on traditional competition law or data protection frameworks, these twin regulations introduce a proactive, sector-specific, and asymmetric regime for governing the power of online platforms within the EU, with effects extending far beyond its borders.
The DMA, applicable since May 2023, targets so-called “gatekeepers”: large platforms designated on the basis of their entrenched and durable market position and cross-market presence. Notably, it imposes behavioral obligations – dos and don’ts – to ensure fair and contestable digital markets. Meanwhile, the DSA, which became fully applicable as of February 2024 (with earlier obligations for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) since August 2023), addresses risks related to illegal content, algorithmic opacity, and systemic online harms.
Together, the DMA and DSA constitute a new chapter in EU digital constitutionalism, resting on the premise that “bigness” entails greater responsibility. They do not merely patch legal loopholes; they recalibrate the EU’s digital ecosystem around normative principles like fairness, transparency, and user empowerment.
Legal Harmonization or Regulatory Overlap?
A key ambition of the Digital Services Act Package is to harmonize digital regulation across the EU. Yet the introduction of these regulations has reignited the debate over potential overlaps and contradictions with existing EU laws, particularly competition law and data protection frameworks.
On the one hand, the DMA is explicitly framed as complementing, rather than replacing, EU competition law. It introduces ex ante rules that sidestep traditional economic analysis and the efficiency-oriented consumer welfare test, shifting enforcement from national antitrust authorities to centralized EU oversight. Critics argue this creates legal ambiguity: is the DMA truly a regulatory instrument distinct from traditional competition law, or merely an accelerated form of sector-specific antitrust? This ambiguity is not just theoretical. National authorities in Germany, Italy, and Belgium have continued to legislate or enforce complementary competition rules, raising questions about double enforcement and the ne bis in idem principle. As the Court of Justice of the European Union (CJEU) recently clarified in bpost (C-117/20) and Nordzucker (C-151/20), overlapping sanctions for similar conduct must be justified by genuinely distinct legal interests, a standard that may prove difficult to uphold in practice.
On the other hand, the Digital Services Act Package introduces a new layer of tension with the EU’s existing data protection framework. Both the DMA and the DSA rely on core concepts embedded in the General Data Protection Regulation (GDPR)—especially the principle of informed user consent—but do so in fragmented ways. For example, the DMA requires GDPR-grade consent for data cross-use between services but offers little guidance on how this consent should be obtained, structured, or communicated. Meanwhile, the DSA imposes algorithmic transparency and content moderation obligations that overlap with GDPR responsibilities assigned to data controllers. These parallel duties are procedurally distinct, enforced by different authorities, and risk generating inconsistent interpretations of what “meaningful” consent and transparency entail in practice.
The result is an intricate web of compliance duties spread across multiple regimes, without a unified interpretive framework. This fragmentation is further aggravated by the fact that, while the GDPR is enforced by national Data Protection Authorities (DPAs), the DSA introduces Digital Services Coordinators (DSCs), and the DMA centralizes oversight within the European Commission (EC or Commission). Without coordination, this polycentric architecture could undercut rather than reinforce trust and coherence in digital governance.
Consent, Algorithms, and the Struggle for Digital Autonomy
While the Digital Services Act Package aims to restore user control, its implementation reveals a deeper structural issue: the cognitive and emotional burden placed on individuals when navigating fragmented, often opaque digital environments.
The DSA’s algorithmic transparency obligations represent a significant shift. Platforms must now explain how content is prioritized and offer users the ability to opt out of profiling-based recommender systems. These provisions echo broader EU ambitions to enhance informational self-determination. However, in practice, formal transparency does not guarantee meaningful comprehension. Explanations are often buried in jargon or obfuscated by legal disclaimers. Users remain largely unaware of how their data is used to influence their behavior, and even when informed, they lack viable alternatives.
This situation is exacerbated by the widespread use of dark patterns: interface designs that manipulate users into making choices that are misaligned with – or even contrary to – their interests. These include misleading button hierarchies, pre-checked boxes, or emotional nudges, such as guilt-inducing messages or fake urgency. While the DSA prohibits platforms from designing their interfaces in ways that deceive or manipulate users (Article 25), its scope is limited. The Regulation explicitly excludes conduct already governed by the GDPR or the Unfair Commercial Practices Directive (UCPD), creating enforcement gray zones. Moreover, the standard used to evaluate deception – the “average consumer” under the UCPD – fails to account for digital-specific vulnerabilities, such as algorithmic amplification and behavioral manipulation.
To address these limitations, scholars have proposed a shift toward digital fairness by design, a model that treats all users as potentially vulnerable and places the burden on platforms to ensure their interfaces support informed, autonomous decision-making. This aligns with the ethical principle of informational self-determination: the right not merely to access data, but to understand and control how one is profiled, nudged, and governed by algorithmic systems.
The EU’s current regulatory framework gestures in this direction but falls short of delivering on its emancipatory promise. Without complementary obligations, such as standardized consent formats, usability audits, or independent interface testing, users remain trapped in a system optimized for formal compliance rather than real empowerment. The struggle for digital autonomy is thus no longer about whether users are given choices, but about how those choices are structured, and who ensures they are real.
Protecting Users Against the Rise of Private Power
The DSA is, at its core, a political response to the rising influence of private actors over public discourse. By regulating content moderation, recommender systems, and the use of personal data in digital advertising, the DSA aims to ground online governance in European constitutional values, particularly freedom of expression. This becomes evident in the obligation for VLOPs and VLOSEs to conduct systemic risk assessments under Article 34, as well as in related recitals, such as Recitals 79 and 84, which highlight social harms related to platform design and potentially harmful content, respectively.
Yet the risks of over-delegation to platforms remain. Content moderation systems, especially when automated, can reinforce existing biases or suppress lawful expression. The DSA’s definition of “illegal content” defers to national law, raising concerns about inconsistent enforcement across Member States. And while platforms must now provide notice and redress mechanisms for removed content, the line between harmful and protected speech remains difficult to draw.
The challenge is not just legal but epistemic: How can regulatory design accommodate the limitations of current technologies? How can platform accountability be ensured without granting them quasi-sovereign powers to decide what speech is permissible? The DSA begins to grapple with these tensions, but much will depend on how these provisions are implemented.
Innovation at a Crossroads
Perhaps the most controversial aspect of the DMA is its potential impact on innovation. While it seeks to dismantle entrenched market power, it also imposes restrictions on data use, interoperability, and default configurations that some argue will constrain product integration and technological development.
This is not just a question of economics but one of regulatory philosophy. The EU has adopted a precautionary model of digital governance, prioritizing fairness and contestability over maximal innovation. But this approach may come at a cost. The EU already lags behind the US and China in key innovation indicators, from R&D investment to AI leadership. Whether the DMA and DSA will bridge or widen that gap remains to be seen.
Moreover, the regulations’ focus on US tech giants (GAFAM—Google, Apple, Facebook/Meta, Amazon, Microsoft) has drawn criticism from across the Atlantic. Out of the seven designated DMA gatekeepers, five are based in the US, with only one European company (Booking.com) on the list. Similarly, out of nineteen DSA VLOPs/VLOSEs, seventeen are headquartered in the US. While legal scholars continue to debate potential WTO implications, the broader geopolitical significance of this asymmetry cannot be ignored.
From Theory to Practice: The Commission’s First Tests of Gatekeeper Compliance
After years of regulatory design and political debate, the DMA has entered the implementation phase, and the EC has wasted no time testing its teeth. Since March 2024, all designated gatekeepers have been required to comply with all DMA obligations. By March 2025, each had submitted updated compliance reports outlining the measures adopted to ensure alignment with the regulation, with ongoing updates required as their services evolve.
These reports offer the first real-world glimpse into how digital giants interpret and operationalize their new responsibilities. Under Article 11 of the DMA, gatekeepers must describe, in a detailed and intelligible manner, how they meet each obligation, from interoperability of messaging services to bans on self-preferencing in ranking systems. While the reports are publicly available, the EC’s assessment is ongoing and largely opaque. Nonetheless, preliminary scrutiny suggests important fault lines are emerging.
The EC has already raised concerns about the sufficiency of some compliance strategies. In particular, questions loom over Apple’s approach to alternative app stores and sideloading on iOS, Meta’s handling of cross-service data portability, and Google’s adjustments to search ranking algorithms. Critics argue that many of these “compliance” measures are minor design tweaks that preserve underlying market power. The risk is that platforms may nominally satisfy DMA requirements while maintaining business models that stifle competition and entrench user lock-in – a strategy some scholars describe as “compliance minimalism”.
To address these risks, the EC has begun organizing multi-stakeholder compliance workshops, inviting researchers, regulators, and civil society to evaluate the effectiveness of gatekeeper strategies. This participatory approach reflects an acknowledgment that enforcement cannot rely solely on doctrinal interpretation or paper-based reporting. Instead, it must examine how platforms function in practice and whether user outcomes align with regulatory goals.
The early signs are mixed. On the one hand, gatekeepers are responding by publishing transparency reports, adjusting interfaces, and opening (in theory) parts of their ecosystems. On the other hand, structural questions persist: Can interoperability be meaningful if APIs are poorly documented or technically restrictive? Can recommender systems be “transparent” if opt-out options are buried or phrased ambiguously? Can data portability work if switching costs remain high?
What is emerging is a deeper realization that compliance is not a checkbox exercise. It is a process of institutional learning, adversarial engagement, and political resolve. The Commission’s willingness to issue fines, launch investigations, or challenge creative evasions will determine whether the DMA becomes a living instrument or just a symbolic one.
More broadly, these initial steps serve as a test for the EU’s ambition to export its regulatory model. If the bloc can show that asymmetric rules can be implemented effectively, even against the world’s largest Big Tech companies, it strengthens the case for digital constitutionalism as a viable governance paradigm. But if enforcement falters, critics may argue that the EU’s approach is bureaucratically burdensome and economically naïve.
The coming months will be decisive. As the Commission prepares its first official evaluations and enforcement decisions, the world is watching not only what the DMA says, but what it actually does. These early enforcement efforts offer a first real-world test of the EU’s vision of digital constitutionalism—one that seeks to translate fundamental rights, democratic accountability, and rule of law principles into platform governance.
Conclusion: A Laboratory for the Future
The DMA and DSA do not simply adjust existing legal frameworks; they reimagine the regulatory DNA for the EU digital economy. Whether they succeed will depend not only on their legal robustness but also on their ability to evolve in tandem with technological and geopolitical realities.
With enforcement now underway, the spotlight has shifted from legislative ambition to regulatory credibility. The Commission’s engagement with gatekeepers, its willingness to act on non-compliance, and its coordination across enforcement bodies will shape not only platform behavior but also the EU’s own role in the global digital order.
If successful, the EU may indeed become the constitutional laboratory of the digital age, demonstrating that fairness, transparency, user trust, and contestability can be operationalized at scale. If not, the bloc risks entrenching complexity without achieving its core goals of trust, safety, and competitiveness.
As implementation unfolds, three questions remain front and center:
- Can regulatory clarity emerge from overlapping legal obligations?
- Will fundamental rights be protected without empowering platform censorship?
- Can the EU reconcile digital sovereignty with its global innovation leadership?
The answers to these questions will define not just the success of the DMA-DSA framework but the broader trajectory of democratic digital governance. As other jurisdictions—from the UK’s Online Safety Act to proposed US antitrust reforms—watch the EU’s experiment unfold, the stakes extend far beyond Brussels. The transition from regulatory ambition to practical accountability will determine whether the EU’s digital constitutionalism becomes a global template or a cautionary tale of regulatory overreach.
Carlotta Buttaboni holds a five-year integrated Master’s degree in Law (combined LL.B./LL.M. equivalent) from the University of Bologna, Alma Mater Studiorum. She is currently serving as a Postgraduate Research Associate at Yale University’s Digital Ethics Center and Editor-in-Chief of the University of Bologna Law Review, a student-run, open-access, and double-blind peer-reviewed law journal published by the University of Bologna, Department of Legal Studies.
Giuseppe Colangelo is the Jean Monnet Professor of European Innovation Policy at the University of Basilicata and an ICLE nonresident scholar and academic affiliate. He also serves as an adjunct professor of markets, regulation, and law at Luiss University and of legal issues in marketing at Bocconi University. In addition, he is a Transatlantic Technology Law Forum (TTLF) Fellow at Stanford University and is the scientific coordinator of the Research Network for Digital Ecosystem, Economic Policy, and Innovation.
Luciano Floridi (Laurea, Rome La Sapienza, M.Phil. and Ph.D. University of Warwick) is the Founding Director of Yale University’s Digital Ethics Center and Professor in the Practice in the Cognitive Science Program. Before joining Yale, he was the OII Professor of Philosophy and Ethics of Information at the University of Oxford. He has published more than 300 works on the philosophy of information, digital ethics, the ethics of AI, and the philosophy of technology.