
How the DSA Aims to Protect Freedom of Speech – With Special Regard to Article 14 of the DSA – Part I.

The known shortcomings in the systems of online platforms stem in part from the (commercially understandable) decisions of service providers to prevent transparency and accountability in the operation of their services and to make it harder than necessary to research their operational and recommender systems. These decisions, some argue, pose a serious systemic risk to the exercise of fundamental rights, particularly because most of them are deliberate, made in conscious disregard of concerns that they have a significant impact on democratic public discourse. Social networking sites, as new urban centers where each user has his or her own soapbox, require a whole new set of rules and approaches. Given that they are global platforms, effective regulation should be sought, mainly but not exclusively, in a transnational form.

In recent years, the European Union has recognized the dangers posed by online platforms and has sought to address the problem of protecting free speech through various means. Among these instruments are codes of practice, co-regulation, directives, and regulations, most importantly the Digital Services Act (DSA). The DSA significantly enhances the protection of fundamental rights online. It addresses a range of issues affecting online experiences and interactions to ensure that users’ fundamental rights are safeguarded in the digital environment. A key aspect of the DSA is its commitment to preserving freedom of expression and information. This is accomplished by establishing clear rules for the removal of illegal content while safeguarding legal content from arbitrary or unjustified removal. This approach balances the need to combat harmful or illegal online material with the right of users to express themselves and access information freely.

The Act mandates greater transparency from online platforms regarding the use of algorithms, especially those used for content moderation, advertising, and recommender systems. This transparency is crucial in ensuring that users understand and can control how their data is used and how content is presented to them. According to Article 27, online platforms using recommender systems must set out in clear and plain language the parameters that the algorithm takes into account when making recommendations and inform users in the same way how they can influence or change those parameters. In other words, users should be given information explaining why particular content is being recommended to them, i.e. which criteria are most significant in determining the recommendations. If the platform offers more than one such recommendation method, users should have an easily accessible, direct, and simple way to select and modify their preferred option at any time. An example of the latter is the “For You” feed on Twitter and Instagram, which became the default for all users and displays posts from accounts they follow, not in chronological order, interspersed with posts the platform predicts they will like.

The DSA also provides for transparency in relation to sponsored content: Article 26 requires platforms to disclose certain information about their advertisements in a clear, concise and unambiguous manner and in real time, including the parameters on the basis of which the platform has “targeted” the user with the advertisement. This information should also be directly and easily accessible from the advertisement and, where possible, easily modifiable. Platforms should also provide access to their algorithms for certain research purposes, and certain accredited researchers should be given access to additional data on recommender systems.

In principle, these provisions will do no more than help researchers and, of course, to a certain extent, users to gain a deeper understanding of how the algorithms work, something that has always been kept closely guarded. While they cannot have a direct impact, and thus cannot provide a clear solution to the filter bubble effect, they can contribute to more efficient consumption of information by informed and knowledgeable users and to researchers’ understanding of the different recommendation processes. However, the DSA’s requirements in this area are rather general and superficial. Facebook, for example, already offers a “Why am I seeing this ad?” tab for all recommended or sponsored content, but this is not sufficient for a full and genuine understanding of the underlying processes, as it lacks much of the accurate, truly personalized information. Still, it is likely to satisfy the conditions of the Regulation.

The DSA also seeks to guarantee content consumption opportunities for less aware users in other ways. Very large online platforms must put in place reasonable, proportionate, and effective risk mitigation measures to address the systemic risks they identify, which may include social polarization or radicalization. Indeed, Article 34 requires providers of very large online platforms and very large online search engines to “carefully identify, analyze and assess systemic risks arising from the design or operation of their services and related systems, including algorithmic systems, or from the use of their services.” The article also specifies that these analyses should cover a number of systemic risks, including negative impacts on fundamental rights, among which the Regulation specifically mentions the fundamental right to freedom of expression and information under Article 11 of the Charter, including “freedom and pluralism of the media”; they must also cover “any actual or foreseeable negative impact on civil dialogue and on the electoral process and public security”. In the context of these risk assessments, the DSA explicitly mentions the adaptation of algorithmic systems and recommender systems as a possible way to mitigate the identified risks. While these risk analyses can be useful, there is currently little predictability as to how effectively these problems will be identified and what tools platforms will put in place to mitigate them. This regulatory solution is clearly positive in terms of the timeliness and adaptability of the legislation, but it is impossible to say at this stage whether it will be sufficiently effective in addressing the various anomalies.

The fundamental rights protection aspects of the DSA are most prominent in Article 14. The first half of the Article introduces a transparency rule requiring intermediary service providers to provide in their terms and conditions adequate information (clear, simple, comprehensible, unambiguous, and user-friendly) on the restrictions they impose on the use of their services. This must include information on all rules, procedures, measures, and tools used for the purpose of content moderation, including algorithmic decision-making and human review, as well as on the rules of procedure of their internal complaint handling system. Building on the first paragraph, paragraphs 2, 3, 5, and 6 add further requirements: users must be notified of any significant changes to the terms and conditions; services primarily aimed at minors must explain their terms and conditions in a way that children can understand; and very large online platforms must provide a concise summary of their terms and conditions and make them available in the official languages of all Member States in which they offer their services.

Of particular interest, however, is Article 14(4):

(4) Providers of intermediary services shall act in a diligent, objective and proportionate manner in applying and enforcing the restrictions referred to in paragraph 1, with due regard to the rights and legitimate interests of all parties involved, including the fundamental rights of the recipients of the service, such as the freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms as enshrined in the Charter.

The paragraph applies to any restrictions on the use of the services imposed by the terms and conditions, not only to decisions taken in relation to specific content. Specific decisions are covered by Article 20, which speaks of decisions taken on the ground that “information made available by recipients of the service constitutes unlawful content or is incompatible with the provider’s contractual terms and conditions”. These specific decisions may include decisions to remove information, restrict its visibility, suspend the user, or terminate monetization. Article 14(4), by contrast, concerns restrictions set out in the terms of use, i.e. rules laid down in advance that constrain users in any way.

This provision raises important issues which will be discussed in the second part of this blog post.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.
