János Tamás PAPP: Media Literacy: What Is It and How Can It Do More Harm than Good in the Fight against Disinformation? (Part II.)

Media literacy is vital in combatting disinformation, enabling the public to critically assess online information and identify trustworthy sources, thereby making informed decisions. This education is essential for both adults and children and should be considered a continuous process due to the ever-evolving digital landscape. While media literacy is generally promoted as a solution to combat disinformation, it’s worth considering the ways it might inadvertently exacerbate the issue in certain contexts.

While media literacy, or critical thinking, is seen as a remedy against disinformation, its effectiveness is debated. The UK Commission on Fake News found that only 2% of children possess the critical literacy skills necessary to discern genuine news from fake. In Finland, media literacy is taught in schools, but the curriculum is limited in scope, and initiatives targeting younger audiences might not have a substantial impact. Voluntary media literacy courses may only attract those who are already discerning, missing the broader audience. Disinformation often capitalizes on psychological factors, such as social identity, and on cognitive biases that lead people to believe they are knowledgeable when they are not. Some studies indicate that media literacy might not be sufficient to combat these biases.

Education about media literacy may have several unintended consequences, one of which is that it may lead individuals to develop an inflated sense of confidence in their ability to differentiate between reality and fiction. After receiving training, some people may assume they are immune to disinformation, causing them to become less alert and to overlook more subtle or sophisticated forms of it. This hubris can be especially dangerous, as it might lead people to unknowingly propagate false information while believing they are championing the truth. Behavioral and survey data show that overconfident people are more likely to visit untrustworthy websites, fail to differentiate between true and false claims about current events, and report a greater willingness to like or share false content on social media, especially when it is politically congenial. This can result in what is known as the “third-person effect”: individuals believe that other people are more influenced by media messages than they themselves are, and as a result they underestimate their own degree of vulnerability.

It is also important to consider the possibility that those who spread disinformation will adapt and evolve in response to widespread efforts to improve media literacy. They could craft even more complex and nuanced narratives, exploiting the very principles taught in media literacy classes to make their content appear more credible. Research conducted in the US and India has also shown that although media literacy can be an effective short-term tool for distinguishing between fake and real news, this effect erodes rapidly: over time the ability to recognize fake news declines, while the news reader remains confident in his or her ability to tell what is real and what is not. In addition, the research showed a small but measurable increase in distrust of real news.

Media literacy can therefore also increase cynicism about the news in general. When individuals are consistently taught to question and doubt media sources, it might result in a blanket distrust of all media, even reputable outlets. This skepticism can create a void where people no longer know whom to trust, pushing them towards echo chambers or fringe sources that reinforce pre-existing beliefs, irrespective of their veracity. While skepticism is a healthy attitude toward information consumption, over-skepticism can lead to the rejection of legitimate sources of information. As people become more versed in identifying potential biases, they may develop a tendency to view all news sources, including credible ones, as inherently biased or agenda-driven. This can create an environment where factual information is met with undue skepticism, rendering individuals more susceptible to conspiracy theories and baseless claims. In other words, the very tool meant to shield against misinformation might render some unable to accept any information at all.

In 2014, Scott Bedley witnessed one of his fifth-grade students mistakenly present the historical figure Ferdinand Magellan as having sailed around the world in 1972 during a class project. The student had sourced this information from Google. Recognizing the need to teach students how to assess the reliability of information, especially in the age of “fake news”, Bedley introduced guidelines for his students to validate the authenticity of online information, such as checking copyrights, verifying with multiple sources, and examining the credibility of the source, and he developed engaging classroom games that challenge students to distinguish between genuine and fabricated news articles. But this can easily lead to students questioning everything the teacher says, losing faith in the teacher’s words, and, after a few Google searches, quoting articles that are completely at odds with the school curriculum, claiming, for example, that the Earth is flat. Of course, it is possible to differentiate between healthy and unhealthy pedagogical approaches to fostering skepticism in pupils and encouraging them to engage in critical inquiry. However, if the sole takeaway of media education is to “question everything,” it could potentially lead to cynicism.

As noted by Jonathan Jarry, an expert in medical misinformation, simply questioning everything without a framework of media literacy and information verification can foster a tendency towards conspiracy thinking: extensive questioning that is not anchored in a structured approach to assessing evidence can steer individuals towards the realm of conspiracy theories. According to a study by the Canadian Centre for Media Literacy, in order to mitigate the risk of individuals succumbing to cynicism, it is imperative to promote a transition from multiplism to an evaluativist perspective. By adopting an evaluativist stance, individuals acknowledge the necessity of reconciling both objective and subjective views of the world through the application of critical thinking. While attaining complete knowledge of the world is highly improbable, individuals have the capacity to develop valuable depictions or frameworks of the world, and some of these depictions are more accurate than others because they are grounded in evidence and logical reasoning. The evaluativist perspective places greater emphasis on identifying reliable sources and approaching media content with an unbiased mindset than on debunking efforts or seeking out negative consequences or hidden motives.

Lastly, there’s the broader societal risk associated with how media literacy is presented. If it is framed as a skill that only some possess, it could further deepen societal divides. Those who consider themselves “media literate” might look down on those they deem less informed, creating an elitist attitude that only widens the gap between different segments of society. Such divides can be exploited by those seeking to spread misinformation, using the very concept of media literacy as a wedge. This latter fear, however, increasingly appears to be unfounded, as multiple studies and reports have been published refuting it, though the remote possibility of this danger certainly remains.

As Sonia Livingstone argues, in the complex realm of media and the information age, media literacy is often seen as a simple solution to a myriad of issues like hate speech, cyberbullying, and fake news. Many look to education as a means to equip the public to navigate the digital landscape. However, the reality of implementing media literacy is more challenging. For one, education requires a significant investment in terms of time, resources, and infrastructure. There’s also the challenge of reaching adults outside the traditional educational framework. Moreover, while education can be seen as a great equalizer, it often amplifies existing inequalities by benefiting those already privileged. As our lives become increasingly digitized, the scope of media literacy expands, raising questions about what areas to prioritize and how to teach about a constantly evolving digital landscape. This also brings into focus the challenges related to the infrastructure and evidence-based practices within the media literacy community. Another significant challenge lies in the politics of media literacy. Calls for increased media literacy often place the responsibility on the individual, which can lead to blaming them for the digital environment’s shortcomings. To truly harness the potential of media literacy, a more holistic approach is necessary. This approach should identify clear roles for all stakeholders and embed media literacy into the foundation of digital organizations. Lastly, the objective should not just be to create obedient online citizens but to foster a space for active, debating, and even dissenting voices in the digital realm.

In conclusion, when addressing online issues like disinformation, media literacy should not be solely problem-centric; its long-standing efforts extend to broader aspects of citizenship. A holistic, long-term approach, spanning a decade or more, is necessary to elevate overall media literacy and thereby inherently bolster defenses against online disinformation. One must also appreciate that media literacy is dynamic, evolving with the rapid changes in technology and media landscapes: as new platforms emerge and old ones transform, the challenges and opportunities they present require a continuously adaptive approach. While media literacy remains a vital tool in the fight against disinformation, it is essential to approach its promotion with nuance and awareness of its potential pitfalls. Like any tool, it is not the solution in itself but part of a broader strategy. By understanding and addressing these unintended consequences, we can ensure that media literacy serves its intended purpose: creating an informed and discerning public that can effectively navigate the complex information landscape of the modern age.

Developing media awareness alone may not be a sufficient solution to fake news. Due emphasis should also be placed on strengthening the role of the press and giving it a high level of constitutional protection, which would provide it with institutional protection not only after publication but also at the stage of information gathering. Support for the production of quality, objective journalistic content could make it more difficult to maintain the separation of parallel publics. The production and dissemination of quality content may also reduce the scope for individual personalization, as balanced and credible information is more difficult to tailor specifically to one’s own opinion. It might also increase trust in the media and perhaps allow a move away from social discourse based solely on opinion and belief towards democratic debate based on facts. A healthy democracy presupposes public debate and free expression of opinion, and quality journalism, including online, is essential for this. The standards and practices of quality journalism are complex, numerous, and dynamic, but the overall goal behind them is the same: to produce accurate, balanced, and useful information. Both traditional and online media must operate according to these principles so that in the future they are tools not for social division but for healthy democracy. The implementation of these principles would help to strengthen the role of traditional journalism and the press and restore trust in them, thereby reducing the number of citizens who give credence to fake news.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.

János Tamás PAPP: Media Literacy: What Is It and How Can It Do More Harm than Good in the Fight against Disinformation? (Part I.)

The phenomenon of disinformation does not need to be explained to anyone. Almost everyone is familiar with the phenomenon, has heard and understood the term “fake news” in English or translated into their own language, and has heard about the amplifying effects of social media sites and the shortcomings of traditional media in this area. The topic has been in the crosshairs of academic research for almost a decade, so unsurprisingly many have been looking for potential solutions to this problem. Some see legal regulation as a solution, others fact-checking, but both solutions create many additional problems. Media awareness is the most frequently mentioned “silver bullet”, but this solution is also not as simple as one might first think.

Fact-checking serves as a crucial tool in the fight against disinformation by providing an evidence-based counter-narrative to misleading or false information. When individuals encounter claims or news stories, fact-checking offers a means to verify the authenticity and accuracy of those claims. Fact-checkers help news and information sources maintain credibility and integrity. This is essential in a digital age where information spreads rapidly and can have significant real-world consequences. However, the effectiveness of fact-checking in combating disinformation is subject to certain limitations. For one, cognitive biases, such as confirmation bias, can lead individuals to resist or dismiss fact-checked information that contradicts their pre-existing beliefs. Furthermore, the sheer volume of disinformation circulating on the internet can make it challenging for fact-checkers to address every false claim. Additionally, fact-checking might sometimes inadvertently amplify the reach of false information by drawing attention to it. Lastly, in highly polarized societies, fact-checking organizations can be perceived as biased or partisan, leading some individuals to distrust their findings. Concerns have also been raised about the social networking sites’ own fact-checking initiatives, pointing to the question of who controls the fact-checkers. Research has shown that there is surprisingly little overlap between the results of different fact-checking sites, and the fact-checkers themselves may be somewhat biased. Thus, fact-checking sites alone cannot be relied upon.

The diversity of definitions of fake news alone shows the complexity of the issues involved. Beyond the obvious cases, it is very difficult to clearly define the scope of fake news, especially in political discourse. The problem of fake news is not simply a question of whether it is permissible to lie in democratic debates. It is a much more complex issue, as it involves examining the political, legal, commercial, and social aspects of the phenomenon. Social networking sites are undoubtedly effective amplifiers of the spread of fake news, and it is clear that platform operators have a serious responsibility to curb it. However, what action we expect them to take is far from clear. In fact, if we encourage them to fight fake news with more effective filtering and monitoring, we immediately run into several problems: on the one hand, we are giving the platforms a legitimate opportunity to shape the democratic public, and on the other hand, we are leaving it to them to decide what they should filter, that is, to decide what is fake news and what is not. Conversely, this means that we also leave it up to them to decide what is the truth.

The constitutional framework of freedom of speech does not in itself prohibit false speech. Untrue statements of fact are a necessary part of public discourse and as such cannot be excluded from the scope of freedom of expression, so their restriction, certainly in the context of public expression, must be judged by strict standards. It would be an easy way out to leave the matter to the operators of social networking sites alone. However, legislative resolution of the issue alone cannot provide a fully satisfactory solution either, since the private law relationship governing the operation of the platforms and the possibilities for action raise questions of their own, and it is also very difficult to draft legislation against fake news so carefully that it does not inadvertently sanction a form of constitutionally protected speech.

The fight against mass manipulation could be more effective if it focused on the means and methods of manipulation rather than on misleading content, thus avoiding the major problem of content-based restrictions, namely who decides what is factual truth. In these forms of action, it is therefore not a matter of the degree of probative value of the content challenged, but of whether or not, for example, a company has run thousands of fake profiles on the site. It also follows that an action focusing on this aspect is much less likely to result in the silencing of certain views or opinions.  

So legislating on this issue is a very complicated and difficult process, and none of the legislative initiatives known so far can be considered a perfect solution, because they are either overbroad and restrict constitutionally protected speech, or they prohibit a very narrow category of disinformation, and thus cannot be said to be effective.

The most frequently cited tool (almost as a silver bullet) to combat disinformation is to develop media literacy, preferably at an early age. In recent years, media literacy has emerged as a beacon of hope in the fight against disinformation. Given the relative difficulty of defining the phenomenon of fake news itself, media literacy is considered by most to be the most effective means of counteracting the negative effects of disinformation on society. The premise is simple: arm people with the tools to critically evaluate the information they encounter, and they’ll be better equipped to differentiate fact from fiction. However, like any powerful tool, media literacy has potential unintended consequences. Understanding these is crucial to ensure that our best efforts to combat misinformation don’t inadvertently fuel the problem.

But what do we mean by media literacy?

Media literacy is an umbrella term that encapsulates several intertwined competencies. At its core, media literacy goes beyond the ability to merely access information; it delves into the understanding, analysis, evaluation, and creation of messages we receive and transmit through various media forms. The Center for Media Literacy defines media literacy as “a framework to access, analyze, evaluate, create and participate with messages in a variety of forms — from print to video to the Internet.” CML also notes that media literacy builds an understanding of the role of media in society as well as essential skills of inquiry and self-expression necessary for citizens of a democracy. Cynthia Vinney boils this down to “the ability to apply critical thinking skills to the messages, signs, and symbols transmitted through mass media”.

Media Literacy Now describes it as the ability to “Decode media messages (including the systems in which they exist); Assess the influence of those messages on thoughts, feelings, and behaviors; and Create media thoughtfully and conscientiously.” According to Matthew Lynch, media literacy includes seven core skills: 1) Inquiry, allowing critical questioning of information’s validity and biases; 2) Search and Research, facilitating differentiation between fact and fiction; 3) Critical Thinking, aiding in accurate interpretation of media messages; 4) Analysis, decoding media message constructions; 5) Evaluation, assessing the credibility of diverse media forms; 6) Ethics and Responsibility, promoting safe, ethical online behavior; and 7) Reflection and Self-Assessment, offering introspection on media’s societal impact and one’s contributions. Together, these skills foster informed and responsible online interactions while encouraging positive media contributions.

Taking the different conceptual elements together, it can be stated that first and foremost media literacy relates to the ability to critically analyze the flood of information we encounter daily through any means. Whether it’s a news report, a social media post, an advertisement, or a podcast, media-literate individuals can discern the underlying purposes, biases, and contexts of these messages. They can sift through layers of content, distinguishing facts from opinions, recognizing the techniques used to convey particular sentiments, and pinpointing potential areas of manipulation or misinformation.

The ability to analyze effectively is intricately linked to comprehending the influence of media messages on our views of reality. A person who possesses media literacy acknowledges that each media message is a deliberately crafted portrayal, subject to the influence of cultural, social, economic, and political elements. This entails recognizing that media does not function as a passive reflection of reality, but rather as a tool that selectively filters, highlights, and occasionally distorts events, individuals, and ideas.

Nevertheless, media literacy encompasses more than a mere collection of abilities; it embodies a particular way of thinking. It fosters a sense of inquisitiveness, motivating individuals to inquire about the phenomena in their surroundings and the knowledge they acquire. The promotion of a more profound comprehension of varied perspectives provided in the media enhances the cultivation of empathy. Moreover, it promotes the significance of active engagement in civic affairs, placing emphasis on the influence of media in moulding public sentiment and governmental decisions.

Moreover, in this digital era, the boundaries between consumers and producers have become increasingly indistinct. Consequently, media literacy plays a crucial role in ensuring that individuals do not merely passively consume information, but actively engage as responsible participants within the media environment. Media literacy entails the ability to discern and acknowledge the significant influence of media in forming views, cultural conventions, and even individual convictions. This underscores the significance of engaging in critical inquiry, analysis, and introspection when consuming media content, rather than passively absorbing it without scrutiny.

The significance of media literacy extends beyond the realms of news and entertainment. This topic encompasses multiple dimensions of human existence, encompassing self-perception, interpersonal understanding, and the construction of meaning in our surrounding environment. Media literacy not only aids individuals in identifying misinformation, but it also facilitates a more profound comprehension of the intricacies involved in the construction of information. This includes an examination of the strategies employed to express specific perspectives, as well as an exploration of the underlying objectives or prejudices inherent in a given piece of content. By cultivating a profound comprehension of media, individuals are endowed with enhanced capabilities to actively engage in democratic processes, partake in constructive discussions, and make well-informed judgments in both their personal and public spheres. Media literacy fosters a heightened level of criticality and thoughtfulness in individuals’ engagement with media, hence promoting active and informed citizenship rather than passive consumption.

Overall, media literacy is therefore a very useful and necessary skill that all citizens should have in order to be able to decode information and messages conveyed through the media. However, too much emphasis on media literacy can do more harm than good; the reasons for this will be discussed in the second part of this blog post.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.

János Tamás PAPP: New Winds Blowing in the Skies of Online Platform Regulation

Facebook and Instagram, owned by Meta, have been temporarily banned from tracking online user activity for ad targeting in Norway. The ban, effective from August, is the result of an order from the Norwegian Data Protection Authority due to Meta’s opaque and intrusive ad practices. During the three-month ban, Meta can show personalized ads only using information provided by users themselves. Non-compliance could lead to daily fines of 1 million Norwegian Krone (€89,500). The ban may be lifted if Meta finds a legal method for data processing that allows users to opt out. This sanction is quite novel and is certainly a welcome result of the growing regulatory effort.

Legal regulation tends to lag a few steps behind changes in life and rarely anticipates them, and this is particularly true in the rapidly changing media field. Over the last decade, three major US companies, Facebook, Google, and Twitter, have become the most dominant platforms for online discourse. This process has in effect privatised the social spaces available on the internet and the rules governing what is allowed on these platforms. The popularity of these platforms has increased in proportion to the responsibility and influence of their operators. Platforms, while undoubtedly broadening the scope for individual expression, also distort and fundamentally redraw the structure of the public sphere, with a decisive impact on the evolution of social dialogue.

Be that as it may, however hard regulation tries, it has not yet found a way to curb this growing influence. There are many jurisdictional and private international law issues raised by this phenomenon, but the most important is the question of what sanctions can actually be used to rein in these platforms. Of course, financial fines are the most obvious form of punishment that can be imposed on such companies, but they are not always effective, as we have seen in Australia, for example.

In 2021, Australia sought to regulate the position of platforms and the journalistic content they offer in abundance, and to establish a framework for cooperation between platforms and the various news media that would reward news media appropriately in return for the monetization of news by platforms. However, Facebook, which opposed the decision, tried to put pressure on the Australian government by blocking news sharing on its platform in Australia. The move, which was seen by many as a form of blackmail, was ultimately successful: Australia amended the law on certain points, so that in the future it will remain up to social platforms and content providers, rather than the law, to decide what exactly constitutes news and how much they can charge for it.

The Norwegian decision finally represents a different approach to sanctions and tries to hit platforms where it really hurts. With such significant financial revenues, fines are often laughed off, but limits of this kind are a clear step forward in terms of regulatory attitude. Of course, we will only be able to judge the effectiveness of the decision in hindsight, but it is certainly welcome that the regulator is looking for new and creative ways to rein in global platforms.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.

Mónika MERCZ: Unlocking the key to safer AI – the necessary data criteria in light of the draft AIA

Nowadays we hear numerous conversations regarding Artificial Intelligence, with a specific focus on the draft AI Act of the European Union. However, when we discuss this issue, it is not just the technology itself which presents challenges. To my mind, setting up several categories of AI technologies based on their risk factor – as the current form of the aforementioned Act does – is simply not enough. The central problem is left open and needs further investigation: namely, the quality of the data used to train deep learning algorithms, and how to lead programmers to a high level of safety, transparency, and accuracy when creating an AI that is appropriate under the draft AIA.

This aspect of AI is crucial for the goals of the digital plan on the future of AI (and dare I say, the future of the EU as well, which seems to be deeply intertwined with technological advances). Companies would also be wise to take these new requirements into account in their plans, as OpenAI, the creator of the infamous ChatGPT, has already come under fire for its use of data scraped from the web to train its chatbot. This violates the rights of millions of internet users, whose data was stolen and from which the company made unbelievable amounts of money, while the data subjects were left without any compensation – or any choice in how their data was used. Of course, when it comes to big data, there will always be concerns – but the EU is seemingly trying its best to tackle them.

The EU aims to regulate these huge companies and to gain enough influence to become a leader in AI, but with a different approach from that of countries such as the US and China. With the Act on Artificial Intelligence specifically, the focus must be placed heavily on data quality, because that is the key to making sure that an AI is indeed safe: that it was trained in a manner which did not violate the rights of data subjects, and that the technology we will slowly use in every aspect of our lives is indeed trustworthy.

But what is data quality? It means measuring how well a dataset meets criteria for accuracy, completeness, validity, consistency, uniqueness, timeliness, and fitness for purpose, and it is critical to all data governance initiatives within an organization. It must be investigated how reliable a particular set of data is and whether or not it is good enough for a user to employ in decision-making. The reason behind its crucial nature is not just AI governance and the goals of the EU: poor data quality also costs organizations an average of USD 12.9 million each year, and over the long term, poor quality data increases the complexity of data ecosystems and leads to poor decision-making. As companies integrate artificial intelligence and automation technologies into their workflows, the effectiveness of these tools will depend largely on high-quality data. Therefore, companies must improve their data quality if they wish to be successful in the EU market in the future.

Recital (44) of the draft AI Act sets out the following requirements with regard to this issue: “High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the rights of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.”

This particular provision explains what needs to be done in order to satisfy the legal requirements when it comes to AI systems employed in companies, and it will mean a lot of work for them, possibly even hampering their competitiveness given such high requirements. I firmly believe that data governance issues will be crucial, especially while companies attempt to comply with the new legislation. Article 10 of the draft AI Act elaborates on this issue in paragraph 2, which states that “Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular, (a) the relevant design choices; (b) data collection; (c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation; (d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; (e) a prior assessment of the availability, quantity and suitability of the data sets that are needed; (f) examination in view of possible biases; (g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed.” Additionally, the Act states that “Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used.” (draft AI Act, Article 10(3)) These are indeed high expectations of companies, and only time will tell whether the future supervisory authorities will take these requirements seriously or be lenient.

In addition to the ever-rising significance of data governance, a few other key concepts should be noted when we discuss this issue. While data quality, as mentioned, is a broad category of criteria that organizations use to evaluate their data for accuracy, completeness, validity, consistency, uniqueness, timeliness, and fitness for purpose, data integrity focuses only on accuracy, consistency, and completeness, together with safeguards implemented to prevent data corruption by malicious actors. Data profiling, on the other hand, is the process of reviewing and cleansing data to maintain data quality standards within an organization; the term can also encompass the technology that supports these processes.

These processes will hopefully contribute to the accurate processing of data, reliable decision-making, and overall a lower risk in the use of AI, not just for business purposes but in general. It will also be particularly intriguing to see how the GDPR’s value increases in the coming years, given that datasets are the backbone of AI technologies. To my mind, data quality should be put first especially because, due to the black-box nature of AI technologies, we have no real insight into how an AI develops its goals. Models sometimes pursue goals their designers did not intend, which creates an alignment problem, where advanced deep learning models could pursue dangerous goals. While there are currently several ideas on how this problem could be solved, the easiest and most reliable thing we can do to avoid misaligned goals is to train models on data sets of high quality. It might not solve all problems, but it would undoubtedly help to reduce the risks associated with both the problems raised in the draft AI Act and those identified by scientists.

The draft of the AI Act contains several principles that we can interpret in favor of this goal, including the aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems, complemented with obligations for testing, risk management, documentation and human oversight throughout the AI systems’ lifecycle (Explanatory memorandum, 1.2.). Because of the requirement of proportionality, for high-risk AI systems the requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy and robustness are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI (draft AI Act, Section 2.3, Legal basis, subsidiarity and proportionality). How well these regulations will be implemented in practice remains to be seen, but seeing that humanity’s safety is at stake, I am hopeful that the long-term risks might be mitigated if most countries slowly start to regulate AI as well.

Ultimately, I must say that categorising AI on a risk-based approach was necessary to avoid things like digital dictatorship and the complete erasure of privacy. However, as this technology could very easily have disastrous consequences for humanity, data quality and the other high expectations that come with enforcing the Act must come first – even at the cost of competitiveness.


Mónika MERCZ, JD, specialized in English legal translation, Professional Coordinator at the Public Law Center of Mathias Corvinus Collegium Foundation while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. She is an editor of Constitutional Discourse. Mónika’s past and present research focuses on constitutional identity in EU member states, data protection aspects of DNA testing, environment protection, children’s rights and Artificial Intelligence.

Email: editor@condiscourse.com

Dorina BOSITS: The Road Towards the Era of Digitalization: The Evolution of European Regulation on Freedom of Expression

The purpose of this essay is to provide a holistic view of some of the milestones in the evolution of international regulation of freedom of expression in Europe, introducing the Universal Declaration of Human Rights, the Charter of Fundamental Rights of the European Union, and the Digital Services Act, and touching upon the AI Act of the European Union.

The starting point of freedom rights in Europe is considered to be the French Revolution’s “Declaration of the Rights of Man and of the Citizen” from 1789. However, compared to the modern fundamental human rights instruments adopted after the Second World War, this document presented only preliminary content[1]. After the war, the demand for peace intensified, and basic principles arose which could not only contribute to the self-determination of a person but also represent the interests of the public on a wider scale[2].

Only three years after the end of the war, in Paris in 1948, the United Nations General Assembly proclaimed the Universal Declaration of Human Rights (UDHR), which is seen as a revolutionary document and a milestone in history. Since then, several documents have been adopted, and almost all of them treat the UDHR as a model document, a source of inspiration for the further development of human rights[3]. Article 19 of the UDHR declares the freedom of opinion and expression, defined as “the freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”[4] As shown, it extends the right not only to the expression of one’s opinion but also to access to information, which shall be available and accessible through any kind of media[5]. The Preamble also highlights freedom of speech: one shall be allowed to express his or her opinion through speech without having to fear negative discrimination after expressing that opinion.

Although this right could appear to be unrestricted, the second paragraph of Article 29 of the UDHR states that freedom can and shall be limited, but only for the protection of the rights of others, and these constraints shall be proportionate[6]. As mentioned above, this international proclamation only outlined the fundamental principles of human rights and provided merely broad definitions. However, the aspirational, generalist view[7] of the document has led to the need for new, more detailed, and more precisely formulated acts in the modern age, where technological advancements have changed the landscape of legislation.

With the positive development of European cooperation in the second half of the twentieth century, the European Convention on Human Rights (ECHR) was proclaimed in 1950, and to date 47 European countries have signed this agreement, of which 27 are also EU member states[8]. The ECHR in Article 10 highlights freedom of expression as a key fundamental right to protect and emphasizes that this protection does not only cover “true” statements[9]. The ECHR serves as one of the first commitments of European countries to the UDHR, and today, when a country wishes to join the European Union, signing and ratifying the ECHR is one of the many accession criteria[10].

By the last quarter of the twentieth century, the European Union (EU) was established as a sui generis institution[11]. After the Maastricht Treaty in 1992 and a small adjustment in the Treaty of Amsterdam in 1999, the question of a European fundamental rights agreement arose between the states[12]. Later, this agreement became the Charter of Fundamental Rights of the European Union, the purpose of which was to protect the citizens of the European Union and to clarify the rights historically established between individual Member States[13]. The Charter was signed in 2000[14], became legally binding with the Treaty of Lisbon, and its consolidated version was published in 2012[15]. The Charter incorporated several principles from the UN’s Universal Declaration of Human Rights and expanded regulation and definitions in several articles.

Article 11 incorporates the same content as the UDHR; however, it adds a second section highlighting the freedom and diversity of the media. Limitations, as in the case of the UDHR, are not listed in this article but are only mentioned later, in Article 52 on the interpretation and scope of these rights. That article is broken down into several sections and is placed in the general provisions, where the application of the Charter is detailed. Compared to the first version of the Charter from 2000, the rules of application are more detailed in the 2012 version. Section 1 of Article 52 emphasizes that the essence of the rights shall be respected and applied with regard to proportionality: any limitation of rights such as the freedom of expression can only be introduced when it is necessary and in the interest of the public good, while not contradicting other principles and rights. Article 21 further highlights the prohibition of discrimination on grounds of any personal belief, political opinion, or other views. At the end of the document, Article 54 states the prohibition of abuse of these principles and rights: no right or activity may be used for the destruction of any other right or principle[16].

The European Court of Human Rights (ECtHR) has developed a six-step stress test that sets out the discretionary criteria according to which a limiting decision can be made. The test provides international companies with step-by-step guidance for monitoring content on their platforms, so they can ensure alignment with international regulation on fundamental rights, especially freedom of expression. According to this, each case shall be observed from different angles, case by case, to decide whether the content can be limited or not[17]. The ECtHR, established under the ECHR and seated in Strasbourg, works as an independent international court[18]. The six-step stress test serves as a guideline next to the Charter but is independent of the document itself. However, since all EU Member States must sign and ratify the ECHR, they accept the jurisdiction of the ECtHR, as well as the consolidated version of the Charter of Fundamental Rights in the Lisbon Treaty[19].

In the past years, the need for new digital regulations has arisen within the EU. After two years of work, the Digital Services Act (DSA)[20] was introduced in 2022, together with the Digital Markets Act (DMA)[21], which regulates competition on online markets. These two acts intend to regulate the online space extensively and try to harmonize users’ rights and obligations within the EU[22]. The need for comprehensive legislation is demonstrated by the online space’s lack of security and protection. However, the content of the new regulation shows that the previous EU documents dealing with fundamental rights still serve as a dogmatic background for the new acts. Therefore, the DSA and the DMA shall be regarded as more detailed and specified acts which are complementary to the previously mentioned, more dogmatic documents, and not as replacements.

Besides unifying the regulation of the digital space in Europe, one of the fundamental purposes of the DSA is to strengthen freedom of expression. Most international regulations put emphasis on limitations in alignment with fundamental rights. The DSA, however, strives to protect freedom of expression and the possibility to spread one’s opinion freely[23]. The DSA requires large search engines and online platforms to make their algorithmic methods understandable for users; however, Decarolis and Li warn that regulators should remember that such online platforms can easily adapt to the new environment and come up with new ways to outmaneuver the applicable rules[24]. The document clearly refers to the Charter in its preamble and restates the most important fundamental principles and rights, highlighting the importance of freedom of expression as well[25]. The Preamble mentions freedom of expression in 14 different sections, which indicates that it is a general guiding principle for the document. In four other instances, the main articles discuss the obligations of online platforms to ensure freedom of expression[26].

As online platforms are becoming too big to be allowed to self-regulate, and can pose a potential threat to democracy, the DSA introduces a balanced system, where the first layer to turn to is the company’s self-regulation, which serves as a buffer against abusive governmental control and strives for better enforceability within the organization. The DSA, as a secondary layer, prescribes monitoring mechanisms and measures that seek to protect users’ freedom to express their opinions. It acknowledges, however, that a self-regulatory environment is to some extent necessary to prevent potential government overreach[27]. The real breakthrough of the DSA can be found in its fine system, which caps the penalty payments that can be imposed for failure to comply with the regulation. According to Article 52, the imposable fines can reach up to 6% of the annual worldwide turnover of the intermediary that fails to fulfil its obligations[28]. The Digital Services Act provides Member States and international online platforms with an extensive set of regulations that they must obey, but enforceability against global media giants remains uncertain, and the clear borders of freedom of expression remain blurry and not completely drawn up.

Freedom of expression and its potential limitations have been debated since ancient times, but after the end of the Second World War the conversation was revived, and instruments such as the UDHR and the Charter started to determine the main principles of freedom of expression. In the 21st century, the need for separate regulation of digital platforms increased, and with that, the DSA and the DMA were created. Another challenge one has to face in the modern world is the increased use of Artificial Intelligence (AI). Thus, the need for regulation arose in that area as well, which also influences the limits of freedom of expression. With the upcoming negotiations on an AI Act, regulation of fundamental human rights is about to become more detailed and specialized in the digital space.


[1] Lassányi, Tamás: A véleménynyilvánítás szabadsága az Interneten. In. Az információs társadalom felé. Tanulmányok és hozzászólások. Replika Kör, Budapest, 2001, p. 130.

[2] Lassányi, Tamás: A véleménynyilvánítás szabadsága az Interneten. In. Az információs társadalom felé. Tanulmányok és hozzászólások. Replika Kör, Budapest, 2001, p. 132.

[3] United Nations: Universal Declaration of Human Rights. 1948. https://www.un.org/en/about-us/universal-declaration-of-human-rights Accessed on 11 July 2023.

[4] United Nations: Universal Declaration of Human Rights. – Article 19. 1948. https://www.un.org/en/about-us/universal-declaration-of-human-rights Accessed on 18 July 2023.

[5] United Nations: Universal Declaration of Human Rights. – Article 19. 1948. https://www.un.org/en/about-us/universal-declaration-of-human-rights Accessed on 11 July 2023.

[6] United Nations: Universal Declaration of Human Rights – Article 29 Section 2. 1948. https://www.un.org/en/about-us/universal-declaration-of-human-rights Accessed on 11 July 2023.

[7] Özler, Ş. İlgü: The Universal Declaration of Human Rights at Seventy: Progress and Challenges. In. Ethics & International Affairs, Winter 2018 (32.4). https://www.ethicsandinternationalaffairs.org/journal/the-universal-declaration-of-human-rights-at-seventy-progress-and-challenges Accessed on 18 July 2023.

[8] European Union: European Convention on Human Rights. https://eur-lex.europa.eu/EN/legal-content/glossary/european-convention-on-human-rights-echr.html Accessed on 19 July 2023.

[9] Council of Europe: The European Convention on Human Rights – Article 10. https://www.coe.int/en/web/human-rights-convention/expression Accessed on 19 July 2023.

[10] Council of Europe: The European Convention on Human Rights – The Convention in 1950. https://www.coe.int/en/web/human-rights-convention/the-convention-in-1950 Accessed on 19 July 2023.

[11] Milana, Marcella: European Union. In. International Encyclopedia of Education (Fourth Edition), 2023 (published online 18 November 2022), p. 494. https://doi.org/10.1016/B978-0-12-818630-5.01057-5

[12] Hobe, Stephan: Will the European constitution lead to a European super-state? In. Futures, Vol. 38, No. 2, March 2006, pp. 171-172. https://doi.org/10.1016/j.futures.2005.04.014

[13] Equality and Human Rights Commission: What is the Charter of Fundamental Rights of the European Union? Updated on 2 August 2021. https://www.equalityhumanrights.com/en/what-are-human-rights/how-are-your-rights-protected/what-charter-fundamental-rights-european-union Accessed on 12 July 2023

[14] European Parliament: Charter of Fundamental Rights of the European Union. 2000. In. Official Journal of the European Communities. https://www.europarl.europa.eu/charter/pdf/text_en.pdf Accessed on 12 July 2023.

[15] European Parliament: Consolidated Version of the Treaty on the Functioning of the European Union. 2012. In. Official Journal of the European Union.

[16] European Parliament: Consolidated Version of the Treaty on the Functioning of the European Union. 2012. In. Official Journal of the European Union.

[17] Gosztonyi, Gergely: Some human and technical aspects of online content regulation. In. Journal of Liberty and International Affairs, Bitola, Institute for Research and European Studies, 2021, Vol. 7, No. 3, p. 159. https://www.ajk.elte.hu/dstore/document/3077/ELTE_AJK_Annales_2019%2007%20Gosztonyi.pdf

[18] European Court of Human Rights: Questions and Answers. n.d., p. 3. https://www.echr.coe.int/documents/d/echr/Questions_Answers_ENG Accessed on 19 July 2023.

[19] European Union: European Convention on Human Rights. https://eur-lex.europa.eu/EN/legal-content/glossary/european-convention-on-human-rights-echr.html Accessed on 19 July 2023.

[20] European Union: Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). 2022.

[21] European Union: Regulation (EU) 2022/1925 of the European Parliament and of the Council on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act). 2022.

[22] European Commission: The Digital Services Act: ensuring a safe and accountable online environment. 2022. In. European Commission. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act-ensuring-safe-and-accountable-online-environment_en Accessed on 16 July 2023.

[23] European Publishers Council: The Digital Services Act must safeguard freedom of expression online. 2022. https://www.epceurope.eu/post/the-digital-services-act-must-safeguard-freedom-of-expression-online Accessed on 15 July 2023.

[24] Decarolis, Francesco and Li, Muxin: Regulating Online Search in the EU: From the Android Case to the Digital Markets Act and Digital Services Act. 2023, pp. 2-3. https://doi.org/10.1016/j.ijindorg.2023.102983

[25] European Union: Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) – Preamble (3). 2022.

[26] European Union: Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) – Article 14, Article 34, Article 47, Article 91. 2022.

[27] Gamito Cantero, Marta: The European Media Freedom Act (EMFA) as Meta-Regulation. 2023. In. Computer Law & Security Review. p.19. https://doi.org/10.1016/j.clsr.2023.105799

[28] European Union: Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) – Article 52. 2022.


Dorina BOSITS is a law student at the Széchenyi István University of Győr, Hungary, and an international finance and accounting graduate of the University of Applied Sciences of Wiener Neustadt, Austria. The main area of her research includes freedom of speech, digitalization, data protection, and financial law. She is a student at the Law School of MCC and a member of ELSA Győr.

János Tamás PAPP: Innovation vs. regulation: how the EU could hinder innovation in the online tech industry

Despite having a gross domestic product (GDP), population, and pool of educated talent roughly equivalent to those of the United States, Europe has watched large internet platforms (mostly U.S.-based digital businesses) lead global tech innovation from Web 2.0 to artificial intelligence. The comparison between the European Union and the United States in terms of technology regulation illuminates two markedly different approaches, each with unique impacts on innovation. In the EU, a stricter regulatory environment aims to balance the power of “gatekeepers” and protect user privacy, but it has raised concerns about potentially stifling innovation. The U.S., conversely, has traditionally taken a more laissez-faire approach, fostering a robust climate of innovation while inviting criticism over the unchecked power of tech giants and over privacy. The EU’s approach appears to be, in sum, “If you can’t innovate, regulate.”

One of the most prominent examples is the General Data Protection Regulation (GDPR), which came into force in 2018 to protect the personal data and privacy rights of EU citizens. While the GDPR is essential in addressing privacy concerns in the digital age, its stringent conditions for data handling and processing may impede tech companies’ ability to innovate. The GDPR makes the collection and analysis of users’ data harder, which is welcome from a privacy standpoint, but it has also disadvantaged small firms and startups relative to larger companies. For example, companies may face restrictions in data mining, machine learning, and artificial intelligence development, areas where massive amounts of data are required for algorithm training and development. The regulation has led to a significant increase in the concentration of the web technology vendor market and to decreased investment in European tech startups. Moreover, the enforcement of the GDPR may inadvertently favor large tech corporations over startups: compliance can be costly and time-consuming, placing a significant burden on smaller firms with limited resources. This discrepancy might discourage innovative ideas and technologies from emerging, further entrenching the existing digital monopolies.

Other important pieces of legislation that could impede tech innovation are the recently adopted Digital Services Act (DSA) and Digital Markets Act (DMA). These regulations aim to prevent anti-competitive behavior by large tech companies and to protect consumers from harmful content online. However, scholars argue that the strict rules may hinder innovation by discouraging companies from exploring new technologies and business models for fear of non-compliance. Both are ex ante regulations, which can in itself be a barrier to effective innovation. Ex post regulation is typically implemented after a market failure or distortion, once enough information is known and sufficient proof of harmful effects has been provided. Ex ante regulation, by contrast, tries to spot issues before they arise: it is predictive and therefore subject to the biases of regulators. Put simply, ex ante regulation instructs market participants on what to do, while ex post regulation instructs them on what not to do.

The DMA, meant to limit the power of “gatekeepers” in the digital economy, has been criticized for its broad definition of a gatekeeper and its potential to stifle innovation. It could inadvertently hinder service innovation and competition, and force companies to duplicate data infrastructures. It also raises concerns about limiting business model innovation and transformation, especially for traditional sectors adopting platform-based models. The regulation’s goal of keeping markets as open and competitive as feasible can be captured by the terms fairness (understood as equality of opportunity) and contestability (understood as lowering entry barriers on and around core platform services). Seen from that perspective, some argue the contrary: the DMA could be consistent with a modernized form of the ordo-liberal tradition that continues to guide much of EU competition law, and in this reading it will not stifle innovation in Europe but will instead increase its variety and, ideally, its level.

On the other side of the ocean, of course, commentators argue in defense of U.S. companies, pointing out that the DSA includes discriminatory clauses specifically directed at major U.S. platforms. For instance, the DSA originally designated 19 organizations as Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs), of which 16 are privately held companies headquartered in the United States, two are located in China, and just one is located in the European Union. A recent CSIS study estimated the considerable economic consequences that U.S. and EU businesses would incur as a result of the DMA, DSA, and other new digital laws: the expected cost of compliance for U.S. service providers ranges from $22 billion to $50 billion, and U.S. services exports worldwide might decline by 2%.

Andrew McAfee of MIT highlights the risk that the European Union’s proposed AI regulation would likewise stifle innovation. The regulation, which classifies systems such as Duolingo’s English Test as high-risk because they use AI for personalization and grading, would impose extensive requirements on high-risk AI systems even before initial testing. McAfee argues that these obligations will deter AI-focused entrepreneurs and investors from working on high-risk applications, thereby slowing technological progress in critical areas like education, hiring, and crime prevention. He suggests that this upfront planning-and-oversight approach could lead to slower progress and growth, pointing to the impact of the General Data Protection Regulation on venture investment and on Google’s market share, and believes it may be contributing to the EU’s lag in the “second machine age”. He contrasts it with “permission-less innovation”, an approach that encourages a broader field of potential innovators, including those with fewer resources. The planned law has also been resisted by dozens of the continent’s leading business figures, who fear that it might harm the bloc’s competitiveness and cause an exodus of investment. As they put it in an open letter: “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” They contend that the proposed rules go too far, particularly in regulating generative AI and foundation models, the technology that powers well-known services like ChatGPT.

Another key element to consider is regulatory uncertainty. Tech companies operate in an environment of constant change and disruption, which calls for flexible and adaptable regulation. The EU’s legislative process, however, is relatively slow, and its regulations often lag behind the pace of technological innovation. This delay and uncertainty can make it difficult for tech companies to plan for the future and to make substantial investments in R&D.

A growing number of scholars argue that the regulatory focus should be on specific anti-competitive practices rather than on large platform operators as such. They suggest that preserving business model innovation should be a top priority and that regulation should ask why ecosystems are competitive, not who is winning. Other suggestions include fostering market contestability in adjacent segments and implementing a decentralized, data-driven accountability regime. Some fear that without careful calibration, the DMA could distort platform industries and disadvantage the small businesses and consumers who rely on these platforms. What is needed, on this view, is an alternative approach to regulatory design: one that lays down general principles and leaves their interpretation to antitrust authorities, so that regulatory institutions can learn and adapt to the dynamic digital environment.

In conclusion, while the EU’s legislative efforts play a crucial role in shaping a safer and fairer digital space, there are legitimate concerns about their adverse effects on technological innovation. A delicate balance must be struck between the need for regulation and the freedom to innovate, and striking it will require ongoing dialogue and cooperation among policy-makers, tech companies, and the wider society to ensure the development of fair and innovation-friendly regulations.


János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.