
How the DSA Aims to Protect Freedom of Speech – With Special Regard to Article 14 of the DSA – Part II.

It is clear that the way social networking sites operate represents a unique concentration of power over the right to freedom of expression of billions of people. It has been pointed out that, through the self-serving interpretation of terms of use or the use of recommendation systems, platforms can in fact invisibly influence the outcome of an election. Kinga Sorbán also points out that content recommendation algorithms can negatively influence not only the free flow of opinions but also access to information and media pluralism, prevent certain valuable opinions from reaching a wide audience, and even reinforce discrimination between certain groups in society.

However, all these are mainly indirect effects of the functioning of content-sorting algorithms and do not limit users’ rights in relation to the platform. The algorithmic decision-making mentioned in Article 14(1) can be considered a restriction only in the context of content aggregation, and the question arises whether automated information aggregation can be carried out in a way that respects users’ fundamental rights and the diversity of the mass media. In the light of the DSA’s provisions on transparency and user autonomy, as described above, the terms of use should set out algorithmic parameters not only for content aggregation but also for recommender systems, and it could easily be concluded (particularly because of its impact on freedom of expression) that any form of algorithm-based decision-making is in breach of the DSA. Based on the text of the Regulation, however, it is more likely that the protection under Article 14 applies only to algorithms used in content moderation. Paragraph (1), by its wording, refers to the tools used in the moderation of content, so it does not cover the sorting of information; rather, it protects against “over-moderation” by platforms, i.e. it is intended to prevent providers from removing otherwise lawful information while filtering content deemed illegal or in breach of contract.

The preamble also states that providers of online platforms should “take into account the rights and legitimate interests of the recipients of the service, including fundamental rights as enshrined in the Charter. For example, providers of very large online platforms should in particular pay due regard to freedom of expression and of information, including media freedom and pluralism. All providers of intermediary services should also pay due regard to relevant international standards for the protection of human rights, such as the United Nations Guiding Principles on Business and Human Rights.” A similar rule is also found in the EU Regulation on combating the dissemination of terrorist content online, Article 5 of which states that when combating terrorist content, intermediary service providers must, inter alia, pay due regard to the fundamental rights of users and take into account “in particular the fundamental importance of freedom of expression and information in an open and democratic society”.

In the traditional approach, it is the State whose task is to protect fundamental rights; the enforcement of fundamental rights between private individuals is a matter of interpretation. As social networking site operators are not state actors, they are in principle not subject to the constitutional requirements of freedom of expression (the constitutional guarantees of freedom of expression cannot be directly invoked against them). A distinction must therefore be made between the vertical (i.e. between the individual and the State) and the horizontal (between individual and individual) scope of fundamental rights. In the first case, the action to be taken in the event of an infringement is clear, since the primary addressee of fundamental rights is the State. In the latter case, however, a failure to act can also be attributed to the State when the horizontal application of fundamental rights is examined. The limits of the protection of fundamental rights against third parties must be established by the legislator, so when a third party violates a fundamental right, it may have done so because of the legislator’s incomplete or inadequate regulation, or because of subsequent inadequate enforcement of the law; the violation of fundamental rights is therefore also committed by the State. The question thus arises whether the State’s institutional protection obligation can be extended to social networking sites, i.e. whether the State may be obliged to create rules that specifically guarantee freedom of expression on social media. Since the DSA refers to international standards and the UN guiding principles for the protection of human rights, it can be stated, on the basis of the interpretation of fundamental rights by international courts, that Article 14(4) of the DSA raises the horizontal scope of fundamental rights in the relationship between online platforms and users to the level of EU regulation.

In the context of recommender systems, this fundamental rights issue can arise on two sides: that of the content provider and that of the content consumer. On the content provider’s side, the issue of freedom of expression arises mainly in the context of content moderation and less so in the context of recommender systems: the platform allows the content to be published, and under the fundamental rights approach to freedom of expression it cannot be held accountable for the audience to which that content is distributed. On the content consumer’s side, the question of access to information may arise. The European Union itself has recognized that the development of the digital environment can play a key role in citizens’ access to information online, and today disinformation is the main concern of social networking sites in relation to access to information, on which UNESCO launched a specific campaign at the beginning of 2023. In the context of social networking sites and the Internet, however, it is difficult to argue that content recommendation algorithms prevent citizens from accessing information, since the information is available on the platforms and can be accessed by anyone through search or direct interaction. Yet even if algorithms cannot prevent access to information, they can make it more difficult to find information from a variety of sources: the average user consumes the platform passively, and it is unrealistic to expect them to visit each source of information individually. In this context, therefore, questions relating to the obligation to ensure the diversity of information are more relevant.

A common view is that substantial societal improvement could be achieved against the phenomenon of filter bubbles if platforms adapted their algorithms so that the mass of users is exposed to more diverse content. This is presumably the aim of the DSA, which requires operators of online platforms to pay particular attention to freedom of expression and information, including the freedom and diversity of the mass media, and to identify and address, in their risk assessments, any threats to that freedom and diversity. These provisions therefore require platforms to safeguard and guarantee the diversity of the mass media, ensuring, inter alia, that users have the means to obtain balanced information. This proposed solution is based on the premise that diverse content can reduce polarization.

Many scholars argue that “bursting” the individual filter bubble necessarily improves society by enabling more informed individual choices. Taking into account the bias of traditional media, however, the overall picture becomes more complex, and the above conclusion may not hold. It ignores the phenomenon of hostile media bias: despite the idea that filter bubbles can be neutralized by introducing other types of news, research shows that users perceive otherwise neutral, objective information that contradicts their worldview as hostile and reject it.

For example, a study by a US research team examined how users react to opposing views on Twitter. Republican subjects were shown large amounts of Democratic content, and vice versa. Comparing attitudes before and after, the researchers found that messages from the opposing side made users’ attitudes more polarized and entrenched them more deeply in their original views.

The issue of the diversity of information is also a regular topic of debate in traditional media, and the practical problems associated with it are magnified in the online space. How can diversity be measured? What indicators can be used to determine whether an online platform’s service meets the criteria of a diverse mass media? What exactly does the diversity of the mass media mean in the online space? Obviously, no online platform can be expected to balance the content presented to each user on an apothecary’s scale along the lines of political bias, but beyond the aforementioned neutrality, what positive desiderata can we expect from platforms? And then there is the next big question: the verifiability of the measures taken. After all, online service providers and advertisers have built their services almost entirely around the personalized user experience, to the point where there is no longer any objective way of verifying the information displayed. While everyone (including, where appropriate, the monitoring authorities) sees the same thing in a traditional media stream, the mix of content on social media is different for each user, making a central, universal content monitoring system impossible.
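To make the measurement question concrete, one candidate indicator discussed in the exposure-diversity literature is the entropy of the source distribution in a user’s feed. The sketch below is purely illustrative and hypothetical: nothing in the DSA prescribes this metric, and the function name, the feed structure, and the “source” field are assumptions made for the example.

```python
import math
from collections import Counter

def exposure_diversity(feed_items):
    """Normalized Shannon entropy of the source distribution in one
    user's feed -- one *candidate* indicator of exposure diversity,
    not a metric prescribed by the DSA."""
    counts = Counter(item["source"] for item in feed_items)
    if len(counts) < 2:
        return 0.0  # a single source (or an empty feed) means no diversity
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts))  # 1.0 = perfectly even source mix

# Hypothetical feed: three items from one outlet, one from another.
feed = [{"source": "outlet_a"}, {"source": "outlet_a"},
        {"source": "outlet_a"}, {"source": "outlet_b"}]
print(round(exposure_diversity(feed), 2))  # 0.81 -- a fairly narrow mix
```

Even such a simple measure illustrates the verifiability problem raised above: the score can only be computed from each user’s individual feed, which no external observer sees in full.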

We have now entered the age of the “networked information economy”, in which information is typically produced in a decentralized way by users; because doing so is free, a very large number of people take advantage of the possibility, making it almost impossible to control what is displayed, and how, for each user. In conclusion, Article 14 of the DSA undoubtedly strengthens the enforcement of fundamental rights in the online space, but the precise content and scope of its provisions remain open to question.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.
