How the EU Plans to Combat Threats to Democracy on Online Platforms

On Thursday, February 8th, 2024, the European Commission opened a public consultation to gather feedback on proposed guidelines under the Digital Services Act (DSA) aimed at preserving the integrity of electoral processes. These are the first guidelines issued pursuant to Article 35 of the DSA, and they target Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), guiding them in identifying, assessing, and managing any adverse impacts their services may have on democratic elections. By highlighting examples of potential risks and offering a range of mitigation measures, the guidelines aim to give these providers a comprehensive set of best practices for safeguarding electoral processes and fostering a safer digital environment during election periods.

The guidelines are informed by a variety of relevant legislative frameworks, encompassing initiatives aimed at enhancing transparency in political advertising, the forthcoming AI Act, and comprehensive measures to combat disinformation. By integrating these legislative initiatives, the guidelines aim to establish a robust foundation for digital platforms to operate responsibly, ensuring the integrity of electoral processes and reinforcing the democratic infrastructure.

The provisional guidelines offer a detailed overview of strategies for mitigating risks to electoral processes. These include targeted measures for managing content produced by generative AI, risk mitigation to be implemented both before and after electoral events, and advice tailored specifically to the European Parliament elections. Under Article 35 of the DSA, the European Commission, in collaboration with the Digital Services Coordinators of the member states, may publish guidelines addressing specific risks. The purpose of these guidelines is to outline best practices and suggest effective mitigation strategies, including measures to counteract the spread of disinformation through generative AI and to ensure the integrity and trustworthiness of content circulated on digital platforms. The guidelines further emphasize both proactive and reactive risk management, encouraging platforms to prepare well in advance of elections and to remain vigilant and responsive to emerging threats throughout the electoral cycle.

A crucial aspect of these guidelines is the emphasis on enhanced risk identification. Platforms are expected to proactively identify and assess risks that might harm civic discourse and electoral integrity. This includes understanding the platform’s role in political discussions, potential vulnerabilities to exploitation, and ensuring compliance with privacy and data protection laws. By developing a detailed risk profile, platforms can more effectively tailor their mitigation strategies to address the most pressing vulnerabilities.

Mitigation measures form the second pillar of these guidelines, focusing on providing access to official information, promoting media literacy, and offering more contextual information about the content. Directing users to reliable sources helps counter misinformation and ensures voters have access to accurate information about the voting process. Media literacy initiatives are crucial for helping users discern between credible information and misinformation, while contextual information about content enhances platform transparency and credibility.

One of the primary strategies involves analyzing and moderating the virality of content that could undermine electoral integrity. This could be achieved through mechanisms that introduce friction or “circuit-breakers” to the spread of potentially harmful content, such as implementing fact-checking services, warning labels, and adjusting algorithms to prevent undue amplification.
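
To make the "circuit-breaker" idea concrete, the sketch below shows one way a platform might throttle amplification of fast-spreading, unverified content. It is purely illustrative: the class, thresholds, and review workflow are assumptions of this sketch, not mechanisms specified in the guidelines.

```python
from collections import deque
from time import time

# Hypothetical sketch of a virality "circuit-breaker": if an unverified post's
# share velocity exceeds a threshold, algorithmic amplification is paused
# pending review. Thresholds and names are illustrative assumptions.

SHARE_WINDOW_SECONDS = 3600   # rolling one-hour window for measuring velocity
VELOCITY_THRESHOLD = 5000     # shares per window that trip the breaker

class PostCircuitBreaker:
    def __init__(self):
        self.share_times = deque()  # timestamps of recent shares
        self.tripped = False

    def record_share(self, now=None):
        now = time() if now is None else now
        self.share_times.append(now)
        # discard shares that have fallen out of the rolling window
        while self.share_times and now - self.share_times[0] > SHARE_WINDOW_SECONDS:
            self.share_times.popleft()
        if len(self.share_times) > VELOCITY_THRESHOLD:
            self.tripped = True  # pause amplification, queue for fact-checking

    def may_amplify(self, fact_checked: bool) -> bool:
        # distribution returns to normal once the content has been reviewed
        return fact_checked or not self.tripped
```

Measuring velocity over a rolling window means the breaker reacts to sudden bursts rather than to cumulative popularity, which is what makes it plausible as a source of pre-review friction.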

The guidelines also address the significant influence that influencers exert on political discourse and electoral decisions. Platforms are encouraged to provide functionalities that enable influencers to disclose any political advertising content clearly, ensuring transparency regarding the origin and nature of such content. Moreover, political advertising should be distinctly labeled to inform users that they are viewing content with political intentions, with such labels remaining visible even when content is shared across the platform. Providers are urged to maintain a transparent, searchable repository of political ads, enabling real-time access to data such as sponsorship details, expenditure, and the scope of ad dissemination.
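
As an illustration of what a searchable ad-repository entry could contain, here is a minimal Python sketch. The field names (sponsor, expenditure, impressions, targeting criteria) are hypothetical stand-ins for the "sponsorship details, expenditure, and the scope of ad dissemination" the guidelines mention, not a schema the Commission has defined.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Illustrative record for a searchable political-ad repository.
# All field names are assumptions made for this sketch.

@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor: str                 # who paid for the ad
    expenditure_eur: float       # amount spent on the campaign
    first_shown: date
    last_shown: date
    impressions: int             # scope of dissemination
    targeting_criteria: list[str] = field(default_factory=list)
    label: str = "Political advertising"  # user-facing label, kept on shares

def search_ads(repository: list[PoliticalAdRecord], sponsor: str) -> list[dict]:
    """Return all ads from a given sponsor as plain dictionaries."""
    return [asdict(ad) for ad in repository if ad.sponsor == sponsor]
```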

Additionally, the guidelines recommend demonetizing disinformation content, so that financial incentives do not fuel the spread of false information around elections. Maintaining the integrity of services is paramount, with platforms advised to establish procedures for detecting and disrupting coordinated inauthentic behavior, such as the use of botnets, the impersonation of candidates, or the manipulation of media content.
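
One simple heuristic for spotting coordinated inauthentic behavior is to look for bursts of identical content posted by many distinct accounts within a short interval. The sketch below implements that single signal; it is an assumption-laden illustration, and the thresholds are invented.

```python
from collections import defaultdict

# Minimal sketch of one coordination heuristic: flag groups of accounts that
# post identical content within a short time window. Purely illustrative.

COORDINATION_WINDOW = 60  # seconds between posts that counts as "lockstep"
MIN_GROUP_SIZE = 5        # distinct accounts required before flagging

def find_coordinated_groups(posts):
    """posts: iterable of (account_id, timestamp, content) tuples."""
    by_content = defaultdict(list)
    for account, ts, content in posts:
        by_content[content].append((ts, account))

    flagged = []
    for content, events in by_content.items():
        events.sort()  # order each content's posts by timestamp
        for i in range(len(events)):
            # collect distinct accounts posting within the window of event i
            burst = {acct for ts, acct in events[i:]
                     if ts - events[i][0] <= COORDINATION_WINDOW}
            if len(burst) >= MIN_GROUP_SIZE:
                flagged.append((content, sorted(burst)))
                break
    return flagged
```

Real detection pipelines weigh timing against content similarity, account age, and network structure; a single burst signal like this one would misfire on organic reactions to breaking news.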

Recognizing the dynamic nature of online risks to electoral processes, the guidelines call for a rigorous and critical analysis of mitigation measures. This involves developing performance metrics to assess the effectiveness of strategies employed, emphasizing the need for measures to be specific, measurable, achievable, relevant, and time-bound (SMART). Third-party scrutiny and collaboration with researchers are crucial for ensuring the measures’ effectiveness and adherence to fundamental rights.
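
As a toy example of a SMART-style metric, a platform might measure the percentage reduction in impressions on content later rated false, over a fixed window around a mitigation's rollout. The function and figures below are hypothetical illustrations, not metrics prescribed by the guidelines.

```python
# Illustrative SMART-style metric: percent drop in impressions on content
# later fact-checked as false, comparing fixed pre/post rollout windows.

def misinformation_impression_reduction(impressions_before: int,
                                        impressions_after: int) -> float:
    """Percent reduction in impressions on fact-checked-false content."""
    if impressions_before == 0:
        return 0.0
    return 100.0 * (impressions_before - impressions_after) / impressions_before

# e.g. 1.2M such impressions in the 30 days before rollout, 780k in the 30 after
print(f"{misinformation_impression_reduction(1_200_000, 780_000):.1f}% reduction")
```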

Regarding political advertising, platforms are to ensure that their systems allow for meaningful research into political advertising campaigns, conforming to EU law and data protection standards. Public transparency about the design and implementation of mitigation measures is also essential, particularly during electoral periods, to ensure fairness and prevent bias.

Fundamental rights considerations are at the core of these guidelines, urging platforms to consider the potential impact of their measures on rights such as freedom of expression and information, including media freedom and pluralism. This includes paying attention to the effects of addressing illegal content that may suppress voices from marginalized groups or minorities. The approach underscores the importance of involving relevant stakeholders throughout the risk assessment process and beyond, fostering open dialogue on good practices and potential improvements. The role of journalists and media service providers in delivering fact-checked, trustworthy information is highlighted as critical to the effective functioning of electoral processes. Collaboration with authorities and stakeholders is essential for the effective implementation of these guidelines. Regular communication with election authorities and engagement with civil society organizations and independent researchers provide platforms with valuable insights for identifying and mitigating risks.

Addressing the risks associated with generative AI is another significant concern. Platforms are advised to clearly label AI-generated content to ensure it is distinguishable for users. This might include watermarking and adding metadata. Ensuring that AI-generated information relies on trustworthy sources and implementing safety measures to prevent the spread of misleading content are recommended practices. Post-election reviews are advised to assess the effectiveness of the mitigation measures employed. Platforms are encouraged to conduct internal assessments to determine whether their strategies were successful and identify areas for improvement. Sharing these insights publicly enhances transparency and contributes to the collective understanding of best practices in digital platform governance.
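
A minimal sketch of the metadata approach: attach a machine-readable provenance record to generated media so downstream services can render a label. The schema here is invented for illustration; real deployments would more likely rely on an industry standard such as C2PA content credentials.

```python
import hashlib
import json

# Hedged sketch of attaching provenance metadata to AI-generated media so
# that platforms can label it. The record schema is a hypothetical example.

def build_provenance_record(media_bytes: bytes, generator: str) -> dict:
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # ties record to file
        "ai_generated": True,
        "generator": generator,  # e.g. name/version of the model used
        "disclosure": "This content was generated by AI.",
    }

def serialize_sidecar(record: dict) -> str:
    """Serialize the record as a JSON 'sidecar' shipped alongside the media."""
    return json.dumps(record, indent=2)

# Example: label placeholder generated bytes before distribution.
record = build_provenance_record(b"<image bytes>", generator="example-model-1.0")
print(serialize_sidecar(record))
```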

While the guidelines represent a significant and well-intentioned effort to protect the integrity of elections, concerns persist regarding their practical effectiveness. The rapid evolution of digital platforms, alongside the innovative methods employed by those seeking to disrupt democratic processes, poses a continuous challenge to the static nature of regulatory measures. Although the guidelines offer a robust framework for identifying and mitigating risks, their real-world application will require ongoing adaptation, rigorous enforcement, and a commitment to collaboration among all stakeholders involved.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled "Regulation of Social Media Platforms in Protection of Democratic Discourses."
