Vagelis PAPAKONSTANTINOU: The (New) Role of States in a ‘States-As-Platforms’ Approach

The forceful invasion of “online platforms” not only into our everyday lives but also into the EU legislator’s agenda, most visibly through the DSA and DMA regulatory initiatives, has perhaps opened up another approach to state theory: what if states could also be viewed as platforms themselves? Within the current digital environment, online platforms are information structures that hold the role of information intermediaries, or even “gatekeepers”, among their users. What if a similar approach, that of an informational structure, were applied to states as well? How would that affect their role under traditional state theory?

The ‘States-as-Platforms’ Approach

Under the current EU law approach, online platforms essentially “store and disseminate to the public information” (DSA, article 2). This broadly corresponds to the digital environment around us, accurately describing a service familiar to us all whereby an intermediary offers to the public an informational infrastructure (a “platform”) that stores data uploaded by a user and then, at the request of that same user, makes such data available to a wider audience, be it a closed circle of recipients or the whole wide world. In essence, the online platform is the necessary medium that makes this transaction possible.

Where do states fit in? Basically, states have held the role of information intermediaries for their citizens or subjects since the day any type of organised society emerged. Immediately at birth, humans are vested with state-provided information: a name, as well as a specific nationality. Without these a person cannot exist; a nameless or stateless person is unthinkable in human societies. This information is subsequently further enriched within modern, bureaucratic states: education and employment, family status, property rights, taxation and social security are all information (co-)created by states and their citizens or subjects.

It is with regard to this information that the most important role of states as information brokers comes into play: states safely store and further disseminate it. This function is of paramount importance to individuals. To live their lives in any meaningful manner, individuals need to have their basic personal data, first, safely stored for the rest of their lives and, second, transmittable in a validated format by their respective states. In essence, this is the most important and fundamental role of states, taking precedence even over the provision of security. At the end of the day, provision of security is meaningless unless the state’s function as an information intermediary has been provided and remains in effect – that is, unless the state knows whom to protect.

What Do Individuals Want?

If states are information brokers for their citizens or subjects what is the role of individuals? Are they simply passive actors, co-creating information within boundaries set by their respective states? Or do they assume a more active role? In essence, what does any individual really want?

Individuals want to maximise their information processing. This wish is shared by all, throughout human history. From the time our ancestors drew on cave walls and improved their food-gathering skills to the Greco-Roman age, the Renaissance and the Industrial Revolution, humans have basically always tried, and succeeded, to increase their processing of information, to maximise their informational footprint. Or, in Van Doren’s words, “the history of mankind is the history of the progress and development of human knowledge. Universal history […] is no other than an account of how mankind’s knowledge has grown and changed over the ages”.

At a personal level, if it is knowledge that one is after, then information processing is the way of life that person has chosen. Even a quiet life, however, would be unattainable if new information did not compensate for the inevitable change around us. And, for those after wealth, what are riches other than access to more information? In essence, all of human life and human experience can be viewed as the sum of the information around us.

Similarly, humankind’s wish to maximise its information processing includes the need for security. Unless humans are and feel secure, their information processing cannot be maximised. On the other hand, this is as far as the connection between this basic quest and human rights or politics goes: increased information processing may assumedly be favoured in free and democratic states, but this is not necessarily so. Human history is therefore a long march not towards democracy, freedom, human rights or any other (worthy) purpose, but simply towards information maximisation.

The Traditional Role of States Being Eroded by Online Platforms

Under traditional state theory, states exist first and foremost for the provision of security to their citizens or subjects. As most famously formulated in Hobbes’ Leviathan, outside a sovereign state man’s life would be “nasty, brutish, and short” (Leviathan, XIII, 9). It is to avoid this that individuals, essentially under a social contract theory, decide to forego some of their freedoms and organise themselves into states. The political forms these states adopt from that point on can go in any direction, ranging from democracy to monarchy or oligarchy.

What is revealing for the purposes of this analysis, however, is the book’s frontispiece: in it, a giant crowned figure is seen emerging from the landscape, clutching a sword and a crosier beneath a quote from the Book of Job (Non est potestas Super Terram quae Comparetur ei / There is no power on earth to be compared to him). The torso and arms of the giant are composed of over three hundred persons, all facing away from the viewer (see the relevant Wikipedia text).

The giant is obviously the state, composed of its citizens or subjects. It provides security to them (this is, after all, Hobbes’ main argument and the book’s raison d’être), but how is it able to do that? Tellingly, by staying above the landscape, by seeing (and knowing) all, by exercising total control over it.

Throughout human history, information processing was state-exclusive. As seen, the only thing individuals basically want is to increase their processing of information. Nevertheless, from the ancient Iron Age empires to the Greek city-states, the Roman empire or the medieval empires in the West and the East, this was done almost exclusively within states’ (or empires’) borders. With small exceptions (the narrow circles of merchants, soldiers or priests who travelled around), any and all data processing by individuals was performed locally within their respective states: individuals created families, studied, worked and transacted within closed, physical borders. There was no way to transact cross-border without state intervention, and thus control, either in the form of physical border-crossing and relevant paperwork or import/export taxes or, even worse, mandatory state permits to even leave town. This was as true in our distant past as it remained until the early 1990s, when the internet emerged.

States were therefore able to provide security to their subjects or citizens because they controlled their information flows. They knew everything, from business transactions to personal relationships. They basically controlled the flow of money and people through control of the relevant information. They could impose internal order by using this information and could protect from external enemies by being able to mobilise resources (people and material) over which they had total and complete control. Within a states-as-platforms context, they co-created the information with their citizens or subjects, but they retained total control over this information for themselves.

As explained at a recent MCC conference last November, online platforms have eroded the above model by removing exclusive control of information from the states’ reach. By now, individuals transact over platforms, bypassing the mandatory state controls (borders, customs etc.) of the past. They study online and acquire certificates from organisations that are not necessarily nationally accredited or supervised. They create cross-national communities and exchange information or carry out common projects without any state involvement. They have direct access to information generated outside their countries’ borders, completely uncontrolled by their governments. States, as information brokers that profited from exclusivity in this role, now face competition from platforms.

This fundamentally affects the Leviathan frontispiece described above. The artist chose to depict all of the persons composing the giant with no face towards the viewer; they face the state. This has changed with the emergence of online platforms: individuals now have faces and look outwards, to the whole wide world that has suddenly been opened up to each one of us, in an unprecedented twist in human history.

The New Role of States

If the generally accepted basic role of states as providers of security is being eroded by online platforms, what can their role be in the future? The answer lies perhaps within the context of their role as information intermediaries (a.k.a. platforms), also taking into account that what individuals really want is to maximise their information processing: states need to facilitate such information processing.

Enabling maximised information processing carries wide and varied consequences for modern states. Free citizens who are and feel secure within a rule-of-law environment are in a better position to increase their informational footprint. Informed and educated individuals are able to process information better than uneducated ones. Transparent and open institutions facilitate information processing, whereas decision-making behind closed doors stands in its way. Similarly, information needs to be free, or at least accessible under fair conditions to everybody. It also needs to remain secure, inaccessible to anybody without a legitimate interest in it. Informational self-determination is a by-product of informational maximisation. The list can go on almost indefinitely, assuming an informational approach to human life per se.

The above do not affect, at least directly, the primary role of states as security providers. Evidently, this task will (and needs to) remain a state monopoly. The same is the case with other state monopolies, such as market regulation. However, under a states-as-platforms lens, new policy options open up while older assumptions may need to be revisited. At the end of the day, from a “pursuit of happiness” point of view, if happiness ultimately equals increased information processing, then states need to, if not facilitate, then at least allow such processing to take place.


Vagelis Papakonstantinou is a professor at Vrije Universiteit Brussel (VUB) at LSTS (Law Science Technology and Society). His research focuses on personal data protection, both from an EU and an international perspective, with an emphasis on supervision, in particular Data Protection Authorities’ global cooperation. His other research topics include cybersecurity, digital personhood and software. He is also a registered attorney with the Athens and Brussels Bar Associations. Since 2016 he has been serving as a member (alternate) of the Hellenic Data Protection Authority, having previously served as a member of the Board of Directors of the Hellenic Copyright Organisation (2013-2016).

Mónika MERCZ: Privacy and Combatting Online Child Sexual Abuse – A Collision Course?

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a Joint Opinion on the Proposal for a Regulation to prevent and combat child sexual abuse on 29 July 2022. While this has not made huge waves in public discourse, we must take a moment to discuss what this stance means for how we view data protection in relation to child protection, and specifically the fight against online child sexual abuse material (CSAM). International Data Protection Day seems like a good occasion to contribute to this debate.

The Proposal’s aim was to impose obligations when it comes to detecting, reporting, removing and blocking known and new online CSAM. According to the Proposal, the EU Centre and Europol would work closely together in order to transmit information regarding these types of crime. The EDPB and EDPS recommend that, instead of giving law enforcement direct access to data, each case should first be assessed individually by entities in charge of applying safeguards intended to ensure that the data is processed lawfully. In order to mitigate the risk of data breaches, private operators and administrative or judicial authorities should decide whether the processing is allowed.

While child sexual exploitation must be stopped, the EDPB stated that the rights to private life and data protection must be upheld, and thus only strictly necessary and proportionate information should be retained in these cases. The conditions for issuing a detection order for CSAM and child solicitation lack clarity and precision. This could unfortunately lead to generalised and indiscriminate scanning of the content of virtually all types of electronic communications. But is the safety of our privacy truly worth the pain suffered by minors? Is it not already too late for our society to try to put privacy concerns first anyway? I believe that this issue is much more multifaceted than it would seem at first glance.

There are additional concerns regarding the use of artificial intelligence to scan users’ communications, which could lead to erroneous conclusions. While human beings make mistakes too, the fact that AI is not properly regulated is a big issue. This fault in the system may potentially lead to several false accusations. The EDPB and EDPS stated in their opinion that “encryption contributes in a fundamental way to the respect of private life and to the confidentiality of communications, freedom of expression, innovation and growth of the digital economy.” However, it must be noted that more than one million reports of CSAM were made in the European Union in 2020. The COVID-19 pandemic was undoubtedly a factor in the 64% rise in such reports in 2021 compared to the previous year. This is cause for concern and should be addressed properly.
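
To make the scale of this false-accusation risk concrete, a minimal back-of-the-envelope calculation may help. All figures below (message volume, prevalence, error rates) are purely hypothetical assumptions of mine, not numbers taken from the Proposal, the Joint Opinion or any provider; the point is only that, at the volumes at which electronic communications would be scanned, even a seemingly tiny error rate translates into very large absolute numbers of wrongly flagged, innocent messages.

```python
# Purely illustrative back-of-the-envelope calculation. All figures are
# hypothetical assumptions (not numbers from the Proposal, the Joint Opinion
# or any provider): the point is that scanning billions of innocent messages
# turns even a tiny error rate into a large absolute number of false flags.

daily_messages = 10_000_000_000   # assumed number of scanned messages per day
prevalence = 1e-6                 # assumed share of messages actually containing CSAM
true_positive_rate = 0.90         # assumed detection (recall) rate of the scanner
false_positive_rate = 0.001       # assumed share of innocent messages wrongly flagged

abusive_messages = daily_messages * prevalence
innocent_messages = daily_messages - abusive_messages

true_flags = abusive_messages * true_positive_rate
false_flags = innocent_messages * false_positive_rate

precision = true_flags / (true_flags + false_flags)

print(f"Correctly flagged messages per day: {true_flags:,.0f}")
print(f"Wrongly flagged innocent messages per day: {false_flags:,.0f}")
print(f"Share of flags that actually concern CSAM: {precision:.2%}")
```

Under these assumed figures, fewer than one in a thousand flags would actually concern CSAM, which illustrates why individual assessment and safeguards before any data reaches law enforcement matter so much.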

In light of these opposing views about the importance of individuals’ rights, I aim to find some semblance of balance. The real question is: how can we ensure that every child is protected from sexual exploitation, perpetrators are found and content is removed, while protecting ourselves from becoming completely transparent and vulnerable?

  1. Why should we fight against the online sexual exploitation of children?

First of all, I would like to point out how utterly vital it is to protect children from any form of physical, psychological or sexual abuse. Protecting children is not only a moral issue, but also the key to humanity’s future. I would like to provide some facts, underlined by mental health experts. We know that any form of child sexual exploitation has short-term effects including regressive behavior, performance problems at school, and an unwillingness to participate in activities. Long-term effects include depression, anxiety or anxiety-related behavior, eating disorders, obesity, repression, and sexual and relationship problems. These serious issues can affect people well into adulthood, culminating in a lower quality of life and leaving members of society less productive.

In addition to these serious psychological consequences, the fundamental rights of victims are infringed, such as the human rights to life, health, personal freedom and security, as well as their right not to be tortured or exposed to other inhuman, cruel or degrading treatment, as guaranteed by the UDHR and other international instruments. In addition to the efforts made by countries that ratified the Convention on the Rights of the Child, I must also mention the United States Supreme Court decision and lower court decisions in United States v. Lanier. In this case we can see that, in the US interpretation, sexual abuse violates a recognized right of bodily integrity as encompassed by the liberty interest protected by the 14th Amendment. Although this American finding dates back to 1997, that does not strip the statement of its validity in our online world.

To speak about the legal framework governing the issue in my home country, Hungary, the Fundamental Law also protects the aforementioned rights. Article XV, under “Freedom and Responsibility”, states that “(5) By means of separate measures, Hungary shall protect families, children, women, the elderly and those living with disabilities.” While this is an excellent level of protection, I would propose adding the clause “Hungary shall take measures to protect children from all forms of sexual exploitation”; even if we do not add it to our constitution, we must make it a priority. Act XXXI of 1997 on the Protection of Children and the Administration of Guardianship is simply not enough to help keep children safe against new forms of sexual abuse, in particular online exploitation. With the dark web providing a place for abusers to hide, what options do we have to expose these predators and recover missing children?

A study explored a sample of 1,546 anonymous individuals who voluntarily responded to a survey while searching for child sexual abuse material on the dark web. 42% of the respondents said that they had sought direct contact with children through online platforms after viewing CSAM, and 58% reported feeling afraid that viewing CSAM might lead to sexual acts with a child or adult. So we can see that the situation is indeed dire and needs a firm response at EU level, or possibly even at a wider international level. Sadly, cooperation between countries with different legal systems is incredibly difficult and time-consuming, and could also lead to violations of privacy as well as false accusations and unlawful arrests. This is where several of the concerns of the EDPB and EDPS arose, in addition to the data protection aspects mentioned before.

  2. Avoiding a Surveillance State

Having talked about the effects and frequency of child sexual abuse online, I have no doubt that the readers of this blog agree that drastic steps are needed to protect our most vulnerable. However, the issue is made difficult by the fear that data provided by honest people wishing to help catch predators could lead to data protection essentially losing its meaning. There are many dire consequences that could penetrate our lives if data protection were to metaphorically “fall”. It is enough to think about China’s Social Credit System and surveillance state, which is a prime example of what can happen if the members of society become transparent instead of the state. Uncontrolled access to anyone’s and everyone’s data under the guise of investigating cases of online abuse could easily lead to surveillance capitalism growing stronger, our data becoming completely visible and privacy essentially ceasing to exist.

Right now, personal data is protected by several laws, notably the GDPR and, in Hungary, Act CXII of 2011 on the Right of Informational Self-Determination and on Freedom of Information. This law is upheld in particular through the work of the Hungarian National Authority for Data Protection and Freedom of Information. The Fundamental Law of Hungary also upholds the vital nature of data protection in its Article VI (3)[1] and (4)[2]. I advise our readers to take a look at the relevant legal framework themselves, but for the sake of this discussion I shall focus on the pertinent data protection aspects.

There are several declarations by politicians and institutions alike that reinforce how essential this field of law is. This is of course especially true in the case of the European Union. As previously stated in one of our posts here on Constitutional Discourse by Bianka Maksó and Lilla Nóra Kiss, the USA takes a quite different approach. But can we justify letting children go through horrific trauma in order to protect our personal information? Which one takes precedence?

  3. A moral issue?

On the most basic level, we might believe that our information cannot be truly protected, so we might as well take a risk and let our data be scanned, if this is the price we must pay in order to protect others. But are we truly protecting anyone, if we are making every person on Earth more vulnerable to attacks in the process?

The Constitutional Court of Hungary has long employed the examination of necessity and proportionality to test which of two fundamental rights needs to be restricted when they collide. I shall defer to their wisdom and aim to replicate their thought process in an incredibly simplified version – as is made necessary by the obvious limitations of a blog post. My aim is to hypothesize whether we could justify an infringement of data protection in the face of a child’s right to life and development.

First of all, I shall examine whether the restriction of our right to data protection is absolutely necessary. If the right of children not to suffer sexual exploitation online (which, again, contains facets of their right to life, health, personal freedom and security, as well as their right not to be tortured or exposed to other inhuman, cruel or degrading treatment) can be upheld in any other – less intrusive but still proper – way than giving up data protection, then restricting privacy is not necessary. While expert opinion leans towards the view that privacy must be upheld, I would like to respectfully try to see it from another side. Currently we are trying to implement measures to stop online child abuse in all its forms, but these yield few results. The issue is growing. Many claim that a form of cooperation between law enforcement, hackers, different countries and many other actors could curb this crime further. Could we ever completely stop it? Probably not. But could we uphold children’s right not to be tortured or exposed to other inhuman, cruel or degrading treatment and to a healthy development? Maybe.

I put forward the idea that at this point we have no more effective measure to stop online child sexual abuse than restricting our own protection of personal data to a degree – child protection is a public interest, and protecting our posterity also has constitutional value. Additionally, the Fundamental Law of Hungary provides in its Article I (3) that “(…) A fundamental right may only be restricted to allow the effective use of another fundamental right or to protect a constitutional value, to the extent absolutely necessary, proportionate to the objective pursued and with full respect for the essential content of that fundamental right.” As I have argued, child protection is undoubtedly of constitutional value and could warrant the restriction of data protection. On the other hand, the Constitutional Court of Hungary has established that privacy protection is also of constitutional value.[3]

As the second step of the test, based on my previous observations, I must wholeheartedly agree that data protection should only be restricted to the most indispensable extent. Because these two issues are so intertwined and difficult to balance, we could have a new policy specifically for cases where CSAM is sought by looking into personal data. I firmly believe that such a solution could be found, but it would require establishing new agencies that specifically deal with the data protection aspects of cases like this. The prevalence of this material on the Internet also makes it necessary to update the laws governing the relationship between privacy and recordings of CSAM.

I cannot think of a better alternative right now than a slight restriction of privacy, even with the added risks. The way things are progressing, the added weight of the global pandemic, inflation, war and climate change will lead to more children being sold and used for gain on online platforms, which are often untraceable. Are we willing to leave them to their fate in the name of protecting society as a whole from possibly becoming more totalitarian? Are we on our way to losing privacy anyway?

These are all questions for future generations of thinkers, who may yet develop newer technologies and safer practices that make balancing these two sides of human rights possible. Until then, I kindly advise everyone reading this article to think through the possible consequences of taking action in either direction. Hopefully, on International Data Protection Day, I have managed to spark your interest in a discussion that could lead to concrete answers and new policies across the EU in the future.

Mónika MERCZ, lawyer, is a PhD student in the Doctoral School of Law and Political Sciences at Károli Gáspár University of the Reformed Church, Budapest. A graduate of the University of Miskolc and former Secretary General of ELSA Miskolc, she currently works as a Professional Coordinator at the Public Law Center of Mathias Corvinus Collegium. She is a Member of the Editorial Board of the Constitutional Discourse blog.

E-mail: monika@condiscourse.com


[1] (3) Everyone shall have the right to the protection of his or her personal data, as well as to access and disseminate data of public interest.

[2] (4) The application of the right to the protection of personal data and to access data of public interest shall be supervised by an independent authority established by a cardinal Act.

[3] Hungarian Constitutional Court Decision 15/2022. (VII. 14.) [24]

Lilla Nóra KISS: Professional Ethics and Morality Can Prevent Social Media From Becoming Sovereign

When top Russian diplomat Maria Zakharova explains that George Orwell’s dystopian classic Nineteen Eighty-Four was written to describe the dangers of Western liberalism and not totalitarianism, we may feel as though we are watching an absurd Monty Python satire. In such parodies, artists question facts and overload conversations with extreme statements to criticize an existing system and discourage the audience from becoming participants in the absurd comedy. While such plays used to cater absurdity primarily for entertainment purposes, they have gradually begun to normalize a reality born of absurdity, which is definitely less enjoyable.

To substantiate this claim, our current reality can be broken down into multiple components of narration: the producers of a play can be viewed as analogous to the owners of media outlets and social media platforms, the directors to the censors, the main actors to the influencers—journalists, politicians, policymakers, and other public figures—and the members of the audience to the users of said media outlets or citizens.

As reality slowly becomes absurd, members of the passively consuming audience become active participants in the play. Obviously, ownership makes profit-oriented decisions; the aim is to maximize the audience and thus the profit. The more extreme and negative the content is, the more people it reaches. The competition to become the most popular outlet slowly pushes the focus from professional, objective, and ethical information-sharing towards sensationalist content (also known as ‘clickbait’), as human beings operate under the ‘bounded rationality’ constraint identified by Herbert A. Simon in 1947. Consequently, this competition promotes partially irrational decision-making. Essentially, humans have to make decisions based on the information available, but due to their cognitive and time limitations, people are vulnerable to the sources of that information. Today, social media serves as a general source of news for Americans. According to the Pew Research Center’s survey conducted in January 2021, Facebook stands out as a regular source of news for Americans (54%), while a large portion of Twitter users regularly gets news on the site (59%). Since resources and capacities are limited, platforms have the green light to filter and pre-digest the news for their users. The cherry-picked news comes from well-selected sources and is delivered directly to users’ newsfeeds. The filtered information behaves as a sub-threshold stimulus that unconsciously shapes users’ interpretations of certain topics. When this is complemented by content whose framing lacks objectivity, users become easy targets of polarization. As a result, the demarcation lines between those who agree with a certain opinion and those who disagree become more acute.
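
As a purely illustrative sketch of the dynamic described above, the toy ranking below (hypothetical posts, weights and scoring function of my own; not any platform’s actual algorithm) shows how a feed that sorts only by predicted engagement systematically pushes the most extreme, negative content to the top.

```python
# Hypothetical toy model of an engagement-driven feed (not any platform's
# actual algorithm): if predicted engagement grows with how extreme or
# negative a post is, ranking by engagement alone surfaces sensationalist
# content first, even when calmer content is just as relevant.

posts = [
    {"title": "Measured policy analysis",       "extremity": 0.1, "base_interest": 0.6},
    {"title": "Outraged hot take",              "extremity": 0.8, "base_interest": 0.5},
    {"title": "Apocalyptic clickbait headline", "extremity": 1.0, "base_interest": 0.4},
]

def predicted_engagement(post):
    # Toy scoring rule: baseline interest plus a bonus for extremity,
    # standing in for the empirical link between outrage and clicks.
    return post["base_interest"] + 0.7 * post["extremity"]

# The feed optimises for engagement only; accuracy and balance carry no weight.
feed = sorted(posts, key=predicted_engagement, reverse=True)

for rank, post in enumerate(feed, start=1):
    print(rank, post["title"], round(predicted_engagement(post), 2))
```

Under these assumed weights, the clickbait headline ranks first and the measured analysis last, even though the latter has the highest baseline interest.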

In today’s age of surveillance capitalism – as Shoshana Zuboff named the “bloodless battle for power and profit as violent as any the world has seen” – the limits of the professional ethics of journalists, politicians, and other influencers are a key question. Another significant point is the owners’ liability for intentionally (trans)forming public opinion. As Count István Széchenyi – often referred to as the Greatest Hungarian – famously expressed, “ownership and knowledge come with responsibilities.” In a world where all information is available on the internet and owners of digital platforms are free to decide what to show to or hide from the masses, ownership over information becomes the most powerful means to shape the future of society. There is no doubt that owners structure societies; the question is whether they do it with moral considerations in mind or purely for their own financial benefit.

The former approach would be the idealistic scenario: it necessitates a social media environment where platforms’ owners do not intend to form public opinion and therefore (1) allow all forms of speech as free speech, even speech prone to expressing extremism, (2) let users pick and choose freely from millions of pre-generated pieces of information based on their consciously and explicitly preselected priorities (which obviously pushes the boundaries of limited human capacities and timeframes), and (3) neither tolerate nor apply any cancel culture. This also inherently implies that (4) even personae non gratae (Latin for “people not welcome”) would be allowed to use these platforms, even if their views are contrary to the views of the ownership and as such considered undesirable on their platforms. This would also entail a lack of double standards and a state of objective fairness. At the same time, such an ideal form of social media management would not automatically excuse crossing certain thresholds, such as sharing hate speech content, child pornography, or any other criminal content, as the platforms would still be legally obligated to take the necessary measures enabling established public institutions to intervene and restore the balance. By the end of this description of the ideal social media platform, there should be no doubt that this utopian scenario does not currently exist.

In the latter case, however, without being overly pessimistic, the world becomes a worse place every day. In this sad but more realistic scenario, private entities are interested in playing with information and exploiting readers’ bounded rationality. As a result, owners can intentionally form public opinion as a side effect of their profit maximization. Of course, the profit-oriented approach is the legitimate interest of corporations – there is nothing wrong with that, as long as profit maximization happens in compliance with ethical and moral standards. However, this leads to a very interesting legal dilemma. On the one hand, corporate decisions on allowing or restricting content are legitimate based on Section 230 of the CDA (the US law setting the standards for ‘decent communications’ since 1996). On the other hand, such decisions may lead to illegitimate consequences, because corporations have no legitimate authority to act as sovereigns and to form, deform, or transform public opinion by using their power over information. Attaining unmanipulated public opinion is, of course, unimaginable and unnecessary in general, but identifying the influencer is crucial. Yet tracing the influencer is almost impossible in the virtual sphere, hence the question of accountability for these platforms.

Translating the situation into the language of legal theory, the debate is about the relationship between law and morals. Natural law theory holds that law should reflect moral reasoning and should be based on a moral order, whereas legal positivism holds that there is no necessary connection between law and the moral order. A symbolic example that highlights the difference from a practical point of view is that Nazi Germany and the Stalinist Soviet Union – two infamous totalitarian regimes of the 20th century – were rule-of-law regimes under a purely legal positivist interpretation. Under natural law, however, these states were not operating under the rule of law, and their laws were not valid due to the lack of morality of their content. This is because natural law requires morality to validate legal content, while legal positivism does not.

Reflecting the opposing views on the current issue, the legal positivist would ask “what does the law say?” and would provide the ‘easy answer’: private entities, including the owners of social media platforms, are legally entitled to make discretionary decisions regarding the content they share or ban on their own platforms, regardless of the influence they exert over their users. However, natural law would require adding the moral values of ‘good’ or ‘bad’, ‘right’ or ‘wrong’ to make an adequate evaluation and give a ‘legal’ or an ‘illegal’ answer. Does these private entities’ exercise of their freedom to influence their users through the content they share lead to legal or illegal outcomes? In addition, if an act is technically legal, does it constitute the use or misuse of corporate freedom under Section 230 CDA? In other words, does Section 230 license these corporations to shape public opinion? Is there any moral standard that the ownership should follow when making private decisions based on Section 230, especially knowing that those decisions may influence and manipulate users? Of course, it is difficult to measure morality, as its levels are very relative. It is even more complicated to evaluate morality in the digital sphere. Yet even so, basic minimum moral standards would support both the ownership in making fair decisions and the creation of the most objective environment possible for the news cycle. Introducing content-neutral and impartial minimum standards based upon morality might therefore help shift the emphasis back to normalcy from a partisan path.

When the producers (owners) introduce moral principles to achieve fairness, the directors (censors) are free to manage their tasks within the frames of their professional ethics. The main players (the influencers: journalists, politicians, policymakers, and other public figures) are then compelled to serve the public interest instead of their own interests. The ownership has a huge responsibility for doing good within and for society. Otherwise, the play becomes an absurd reality produced by quasi-omnipotent owners, directed by unethical censors, and influenced by self-interested public figures.

Morality and law together can prevent social media ownership from becoming uncontested, illegitimate sovereigns. Preserving checks and balances to maintain a healthy balance between the private and the public is important. We have seen what happens when the public-private balance is distorted. In communist dictatorships, private entities are weak compared to state actors and have extremely narrow room for maneuver in their interest advocacy. In two prominent examples, the People’s Republic of China and the Russian Federation, privately-owned media is virtually non-existent: the balance is distorted and private actors are dependent upon public institutions. Dependency, in turn, leads towards toleration of oppression and ideology-based manipulation. As a result, absurd things ensue: the people of Russia may not know they have been invading Ukraine for roughly three months now. In China, Western “traditional” social media is geo-blocked and the Chinese have their own platforms. Strong totalitarian states do not tolerate any private intervention in their decision-making. On the other hand, it is also a mistake when private actors overreach their competences and influence public opinion to serve their private interests.

There is evidence that most people prefer normalcy over extremes. That is good news. Normalcy requires people in the middle to keep a healthy balance between private interest and public interest. Professional ethics, morality, and ownership liability are able to prevent private entities from becoming the new sovereigns producing alternative movies of digital absurdity.


Lilla Nóra KISS is a postdoctoral visiting scholar at Antonin Scalia Law School, George Mason University, Virginia. Lilla participates in the Hungary Foundation’s Liberty Bridge Program and conducts research on social media regulation and regulatory approaches. Formerly, Lilla was a senior counselor on EU legal affairs at the Ministry of Justice, and for five years she was a researcher and lecturer at the Institute of European and International Law of the University of Miskolc (Hungary), where she taught European Union law. Lilla obtained her Ph.D. degree in 2019; the topic of her dissertation was the legal issues of the withdrawal of a Member State from the EU.

Her current research interests cover the legal dimensions of Brexit, the interpretation of the European Way of Life, and the perspectives towards social media regulation in the USA and in Europe.

Lilla Nóra KISS: Lex Facebook or Tax Facebook? Options beyond the self-regulation of IT companies

Tech giants might cause headaches… Especially when we attempt to think simultaneously with the heads of consumers, states and tycoons. This post intends to reveal some dimensions of regulating social media from a legal point of view, or to present the proverbial ‘windmill fight’ against seemingly unapproachable powers.

An unregulated social media environment can lead to interesting debates in many legal dimensions. The reason is that the topic is full of unclear definitions and opaque relationships between different actors. Let’s start by saying that the original purpose and function of social media sites is to connect members of the community and provide them with multiple options to keep in touch. The spread of the internet has made it possible to share (and receive) information everywhere and at any time. Social media sites became more and more widespread and offered options for sharing news in addition to personal information. Therefore, the content of the platforms expanded; the supply grew. Human nature (especially our visceral curiosity about the lives of others) has created an ever-widening consumer market for these expanding services, so demand has also increased. The rapid pace of development and the race to gain market share have encouraged social media providers to change and to provide more, including an increasing number of specialized platforms. In addition, more and more features have been incorporated into the system, improving utilization and – as is by now psychologically proven – strengthening addiction in the daily lives of many people. This effect and the “phenomenon” of social media are well illustrated in the Social Dilemma documentary and in Disconnect.

All these developments have taken place without the underlying concepts being clarified or defined, and without the (minimum) regulation that seems necessary being developed.

The strange situation is that the real concepts of service and consideration, and the relationship between the seller, the buyer and the product, are unclear in the world of social media. I mean that (as Richard Serra put it in a 1973 piece about television commercials reaching millions) when a product is free, the question arises as to whether we are the product. I believe that Serra’s statement about television advertising is exponentially true in the digital age.

The question must then be asked: if we (consumers, users) are the products, then who is the seller? The social media provider? If the social media sites do not sell, do they merely mediate between the products (users) and the real seller(s)? Today, in an increasingly conscious society, it is perhaps no longer news that our personal data (and all the information associated with us) represent economic (monetary) value and are considered a commodity. (See some thoughts on surveillance capitalism by Shoshana Zuboff.) It is valuable to buyers unknown to us(ers) to get to know us, including our habits and decision-making trends. By knowing us, they have the opportunity to influence us and our decisions via targeted ads, based on our browsing habits and history…

The scientific and social concern behind this analysis is that if such tools support the measurement of behavior, the result could be dangerous, as it could affect masses of people, even in a cross-border manner. The free (unregulated) use of the know-how of manipulating millions via the internet and applications – such as social media apps – is more than concerning…

If Artificial Intelligence serves and supports the understanding of the measured behavior of millions, it gives hardly controllable power to those who control that information (Big Data). Thus, the result could lead anywhere. As a simple conclusion, we might become a target audience (or, as you like, victims) without even realizing it. The advantages and disadvantages of direct marketing have been studied and exploited for decades. However, with the rise of social media sites, the traditional methods and techniques of direct marketing could make users (consumers) vulnerable with unforeseen efficiency. Especially nowadays, the effect is no longer just an incentive to buy, but even political manipulation. (Sorry, this option for fabricated reality reminds me of the famous movie Wag the Dog from 1997…) And finally, this option of manipulation leads us to the general legal and constitutional dimensions.

Firstly, it can be seen that we can get caught up even in the clarification of concepts. In the context of data profiling, the issues of privacy and related data protection concerns immediately arise. For example, the data and information we provide may be used (against us). If users of social media sites became aware that their data and profiles were of serious property value and that the social media sites profit off them, would they provide those personal data (at all)? If they were willing to disclose their personal data and other information knowing the above, would they do so for free? It is clear that the data have real economic potential. Could personal data be considered property? Anyone who knows the consumer (user) can reach him/her, regardless of whether the content is intended (needed) by the user or whether he/she intended to receive, e.g., political messages. So, there are those who have an economic interest in learning about personal information or reaching targeted individuals of a specific profile. Thus, in addition to data protection, ownership issues may also arise – in particular with regard to the disposition of the subject matter of the property. While I am sure that considering “data as a subject of property” opens Pandora’s box in legal thinking, this question cannot be avoided when we are talking about social media regulation(s)… The data protection dimension also opens up privacy issues, in which context the right to disconnect has become an important aspect since the 2000s. The significance of the right to disconnect has increased especially in the changing work-life environment due to COVID-19.

The clarification of concepts is also important from a consumer protection point of view. It is necessary to examine the extent to which users can be regarded as consumers, as the legal protection to which they are entitled might differ accordingly. The measurability of influence, the protection of minors, and the tightening of the legal and ethical framework of consumer manipulation are particularly important. An essential accessory to both data protection law and consumer protection law is an appropriate level of information provided to the user/consumer. This includes the responsibility of platform leadership to raise awareness of vital information important to the public, which may be understood as a basic need of consumers. The right to be informed should be treated as a minimum requirement for service providers.

The constitutional dimension mentioned above is relevant in several respects. On the one hand, users are not only consumers, but mostly citizens of certain states. Democratic states and legitimately elected leaders typically have a responsibility to inform citizens in real time about matters of public interest. Here, of course, one can reflect on how information is provided and analyze (and evaluate) certain conceptual elements and levels of matters of public interest, but by abstracting from questions of detail, perhaps we can examine the “phenomenon” of social media as a whole. By the term phenomenon, I want to express the elusiveness of social media. Social media used to serve to facilitate getting to know each other and rekindling old relationships. By now, however, social media plays an active role in providing information by allowing news to be published and shared (disseminated). Public involvement is also important in the active protection of citizens, especially certain vulnerable target groups (e.g., minors).

However, the involvement of the state and politics has appeared in a completely different dimension than we might have imagined. Previously, information spread through the press and the classical channels of media. By now, politicians and state institutions have become active users of social media, yet the legal framework has not been tightened at the same time. In particular, the role of the platforms in disseminating information has grown in importance over the last few years (see e.g. Trump’s election as president, migration, the Brexit referendum, the COVID-19 pandemic, and the Trump-Biden “election war”). The fact is that political actors and state representatives have become active users of social media in order to share information. Here, however, the basic legal requirements of liability are lacking. Who is responsible for the shared content? Who needs to verify the veracity of the content? Who is responsible for spreading false (fake) news? A free press comes with state guarantees and strict accountability rules in our modern democracies. However, what about social media?

In addition to civil liability, misinformation and even its criminal consequences (e.g. incitement to possible crimes, incitement against the community, hate speech and other hate crimes) are also important regulatory considerations. The other side of the constitutional dimension is freedom of speech and freedom of expression. There are well-established civil law rules for this, determining the limits of freedom of expression (e.g., violations of the right to privacy). There are also criminal law frameworks that primarily address the categories of defamation and hate crimes.

As can be seen, there are substantive public and private law aspects to this topic. Constitutional law, criminal law, civil law, data protection, and consumer protection all allow for a legal assessment of possible regulation of social media.

In procedural terms, however, we face serious shortcomings. The cross-border nature of the phenomenon raises questions of competence: who is entitled to regulate social media, by what means, to what extent and with what personal, territorial and temporal scope; who is entitled to control the effectiveness of regulation and deal with possible abuses; and, finally, how could the rules be enforced?

Could a global phenomenon be addressed with local (state or regional) solutions? The European Union is ready to regulate social media, and the Member States have agreed that regulating social media is necessary. Some, including Hungary and Poland, take a stricter approach to the issue (see the concepts of a so-called Lex Facebook). In parallel with the EU, the regulation, accountability, and controllability of social media is also a matter of public and professional interest in the USA. Social media (and other online services) have so far operated in slightly different ways on the two continents, largely due to the GDPR, which imposes significantly stricter rules on the processing of personal data than U.S. federal and state-level solutions. Fragmented legal solutions always raise dilemmas related to efficiency and powers (or the fear of them).

However, the question arises as to whether globally operating platforms can be judged and regulated at a regional level without compromising consumer rights. By the latter, I mean, for example, the consumer protection aspects of geoblocking, which also raise the theoretical question of categorizing citizens in terms of access to content.

Another question to be asked is whether the regulatory framework can be effective top-down or only bottom-up. There are many areas of self-regulation in social media, and there are soft-law solutions for other digital companies as well. Recent political events have highlighted that these are not necessarily effective solutions; self-regulation merely smooths away the conflicts that might arise, without offering an effective solution. However, forcing regulation at the state level could be risky, and its efficiency could be questioned due to the cross-border nature of the platforms. Are there tools for regulation that could be borrowed from other areas in an inclusive manner?

Could tax sanctions work (e.g. a kind of ‘Tax Facebook’ instead of a Lex Facebook)? In my view, taxation (by states) could be risky without a common European tax scheme for digital companies. This is a rough road that has been impassable for decades because of the competence issues of the EU and the Member States. However, EU tax rules on dotcoms would allow social media providers to be taxed by states (or the EU) and sanctioned through tax instruments if they wish to participate in informing citizens about news of public interest. I raise the issue of sanctions along the lines of spreading dis- or misinformation… Financial instruments could force social media to pay tax on the income derived from the use of citizens’ data if the platforms want to become active information- and content-sharing sites (and not just sites for connecting people). This could be interpreted as poking Facebook and other social media instead of punching them with strict rules… However, this is difficult to achieve in a fragmented framework with different tax rules and rates in different Member States, which do not want to harmonise tax rules due to sovereignty claims. The common tax issue, even if only in the digital field, leads us to constitutional dilemmas. The division and transfer of competences/powers is a sensitive area of EU law. The Gordian knot could be cut through with a cheap Chinese solution: geoblocking on social media. With that we could say: “Mischief Managed”!

The problem there is that both the European and American systems are designed to meet the needs of consumers (in trade and politics, too). Modern politics wants not only to serve citizens in this regard, but also to use social media interfaces for information or even campaigning…

All this requires the development of common minimum standards in the areas indicated above, both in substantive and procedural terms. The regulation and the taxation of the companies are seemingly not the best possible solutions… Maybe new solutions could be borrowed from other fields, in an inclusive manner… Well, the clarification of concepts can be a good starting point in that change!

Lilla Nóra KISS is a counselor on EU legal affairs at the Ministry of Justice. Formerly, she was a researcher and lecturer for five years at the Institute of European and International Law of the University of Miskolc (Hungary), where she taught European Union law.

Lilla obtained her Ph.D. degree in 2019. The topic of her dissertation was the legal issues of the withdrawal of a Member State from the EU.

Her current research interests cover the legal dimensions of Brexit, the interpretation of the European Way of Life, and the Digital Single Market.

Márton SULYOK: Size Does Matter (?!)

Some European Debates on the Use of Religious Symbols in the Workplace

On 25 February 2021, Advocate General Athanasios Rantos (CJEU) issued an opinion in two German PRPs (preliminary ruling procedures) on whether an employer’s internal ‘neutrality policy’ can prohibit the wearing of large religious symbols at all, while being more lenient regarding smaller, more modest ones. These neutrality rules obviously encompass all forms of clothing and ‘office wear’, not only specifically identifiable religious symbols, as such regulations normally extend to all manifestations of political, religious and other world views. (Please note: the opinion is not yet available in English – below, I’ll limit myself to the Hungarian text and its translations.)

Constitutional scholarship, free speech advocates and public opinion all approach this topic with care and from many angles, especially with regard to the wearing of such symbols in various public spaces, from universities and educational institutions to public administration offices. Stances regarding “religious dress” (broadly speaking) thus divide Europe, and a multitude of national practices and constitutional and legal rules have become public knowledge due to the jurisprudence of the European Court of Human Rights (Court, ECtHR). One of these decisions is the famous landmark of Leyla Sahin v. Turkey (2005), wherein the Court made the following statement (para. 35), evaluating the relationship of secularism and freedom of religion on the basis of identity: “[t]hose in favour of the headscarf see wearing it as a duty and/or a form of expression linked to religious identity.” In other words, there is a right to the respect of religious symbols that is inherent to freedom of religion on an identitarian basis. The fact that the Court alludes to a “form of expression” linked to this identity transfers the discourse into the realm of free speech, which further complicates how we might interpret rules that limit wearing religious dress or similar symbols. (With these in mind, Erica Howard, a legal expert of the European Commission, has looked at these issues in her 2017 study in a narrower European context, tailored to the EU, also looking at national practices.)

The two German cases (C-341/19 and C-804/18) providing the grounds for the AG opinion mentioned above now touch upon similar issues under German law in light of EU law, regarding the wearing of an Islamic headscarf under the neutrality rules of two companies (a drugstore operator and an association in charge of maintaining kindergartens). The EU law in question, regarding which the preliminary questions of the two German labor courts were raised, is Directive 2000/78/EC establishing a general framework for equal treatment in employment and occupation.

According to the AG’s opinion, restrictions in employers’ internal regulations in this regard do not constitute discrimination if they relate to any manifestation of employees’ political, religious or other world views. (This is based on previous cases such as Achbita or Bougnaoui.) However, this argument should be taken further in relation to the visible wearing of any symbols pertinent to the above, in the instant case religious symbols. After visibility was dealt with in CJEU practice in the famous G4S case, the focus has now shifted to size and ‘conspicuousness’. The AG opinion held that in the instant cases the restrictions affected ‘office wear’ in terms of any signs of religious views visible to third persons, clearly referring to this rule as part of maintaining client relations (paras. 51-52). At this point, the opinion underlined that the current CJEU jurisprudence does not directly entail, in cases similar to the one at hand, that discrimination could not be established regarding rules banning the wearing of Islamic headscarves (para. 56).

In the second part of the opinion, paras. 71-76 contain some key arguments that need to be emphasized. The AG argued that the CJEU has not yet decided on rules banning the wearing of large symbols of political, religious or other world views, which logically means that the following issue needs to be examined: whether small-sized symbols can in fact be worn in the workplace in a visible manner. The AG refers here to Eweida and Others, decided in 2013 by the ECtHR, where the modesty of the symbols was central to the finding of a violation of Article 9 ECHR: the respondent UK was found to have violated the Convention for sanctioning modest religious symbols that were not capable of tarnishing the professional image of their wearers. Consequently, the following argument is made: employers’ neutrality policies – in the context of their client relations – are not inconsistent with their employees wearing small, modest religious symbols that are not detectable at first sight. Here it is argued that size does matter, as the AG is of the opinion that small symbols cannot offend those clients of the company who do not share the religion or views of the employees wearing them.

Returning to visibility and the relevant ban, the AG states that if visible signs can be lawfully banned under G4S, then, based on the freedom to conduct a business (under the Charter), employers are free to explicitly and exclusively ban the wearing of large symbols. So size does matter (?), but the only real question is who is to say what is big or small. The AG is of the opinion that it shall not be the CJEU; such assessment is (duly) deferred to national courts based on the totality of the circumstances, also accounting for the environment in which the symbols are worn. One thing is for sure, though: the size of the Islamic headscarf is not small.

What is also not small is the number of contradictions in the opinion, considers Martijn van den Brink in his latest post on Verfassungsblog. “The only positive aspect” of the opinion, he writes, deals with specifying the relationship between national constitutional and European rules protecting freedom of religion, and in this context the opinion examines whether national constitutional rules protecting freedom of religion can be interpreted as “provisions which are more favourable to the protection of the principle of equal treatment than those laid down in this Directive” in light of Article 8 of the Directive. The AG concludes that they cannot. In this context, it was also raised whether the internal rules of the employer in C-341/19 are superseded by the rules of the national constitution, which might have priority over neutrality policies based on the freedom to conduct a business, in reliance on the standards set by Sahin (where wearing the headscarf was considered by the employee to be a religious duty).

Regarding C-804/18, the AG’s opinion sets forth that the German Federal Constitutional Court (GFCC) emphasized that the freedom to conduct a business under the Charter can no longer be assigned priority over freedom of religion in all cases, specifically where neutrality is imposed by the employer in client interactions but the lack of such neutrality would not lead to any economic disadvantage. On this basis, the GFCC emphasized that situations created by an employer’s intent to accommodate client requests, resulting in services being rendered by an employee not wearing an Islamic headscarf, do not fall under the “genuine and determining occupational requirement” rule of Article 4 of the Directive. In the AG’s opinion, it is not contrary to the Directive if a national court applies the provisions of the national constitution to examine the internal regulations of a private company prohibiting the wearing of symbols referring to political, religious or other world views, but any occurrence of discrimination should be duly assessed by the national courts as well.

Regardless of what arguments the CJEU’s judgment will finally rely on (given that the AG’s opinion is non-binding on the Court), AG Rantos’ opinion surely adds another layer to the European debates on wearing or displaying religious symbols and on the right to respect for these symbols as “forms of expression” tied to one’s identity under the aforementioned Sahin judgment of the ECtHR.

If we shift the identity focus of the arguments from the individual to the state, France comes to mind. The mystery of laicity (laïcité) – often misunderstood abroad – is an inherent element of French constitutional identity, i.e. the constitutional principle of secularism, and defines many aspects of the operation of the State. Debates over the above questions started much earlier here than elsewhere in Europe. Legal debates in concrete cases have surfaced over crucifixes in classrooms, or modest religious symbols hanging from the necks of parent chaperones on a class outing, while the Islamic headscarf (foulard) – just as in the German cases above – and the full-body veil (voile intégral) have both been the subject of court battles and ensuing legislation. Besides lawmakers, the Constitutional Council and the Council of State have both had their say – sometimes from different points of view – as early as 1989 in the infamous case of the “Foulards de Creil”, and again in 2010 regarding the constitutionality of a ban on wearing full-body veils in public spaces for public safety reasons.

Most recently, the 2020 projet de loi (draft bill) on reinforcing the principles of the Republic contains several provisions regarding freedom of religion that expressly originate in the 1905 law that codified the separation of Church and State, thus introducing laicity into French constitutional tradition – although in its original form that law contained no limitations on the “port des signes”, the wearing of (religious) symbols. Prohibition only surfaced later on, in different but familiar contexts: at first, in education (cf. Sahin). The law on laicized education dates back to 1882, but it was a 2004 law that prohibited the wearing of all “ostensibly manifest” (i.e. clearly visible) signs, logically leading to a more lenient approach toward more discreet, modest (small-sized?) signs. In relation to the workplace and employment, the 2020 draft bill, for example, provides that in the course of providing public services the principles of neutrality and laicity must be observed. As for private companies, if their employees engage in client relations, restrictions similar to those mentioned in the German cases may be applied; moreover, if a private company provides a public service by law, it must also observe the rules on neutrality and laicity, according to the draft bill. Obviously, the legislative debate on this draft bill is not over yet and nothing is set in stone – the Senate is to decide on it sometime in May this year – but until then, scholars are left to work with this concept.

Based on the events of the past weeks, we can conclude that the size of the debate is expected to grow, and it does matter which previously settled issues will gain new interpretations in light of new approaches. It is not at all evident whether the wearing of religious symbols is a one-way street, a matter of principle manifesting itself in forms of expression tied to one’s identity, or whether – based on the German cases described above – “size does matter” and in some contexts there is no “one-size-fits-all” solution.

Márton SULYOK, lawyer (PhD in law and political sciences), legal translator. As a graduate of the University of Szeged, he has been working for his alma mater in different academic and management positions since 2007. He is currently a Senior Lecturer in Constitutional Law and Human Rights at the Institute of Public Law of the Faculty of Law and Political Sciences (Szeged), and the head of the Public Law Center at MCC (Mathias Corvinus Collegium) in Budapest. He previously worked for the Ministry of Justice in Budapest and has been an Alternate Member on the Management Board of the Fundamental Rights Agency of the European Union (2015-2020). E-mail: msulyok@mcc.hu

Márton SULYOK: “How to tackle IT?”

On the “state-big tech” debate in social media regulation

The title seeks to sum up the literal ‘it-debate’ (the IT-debate, if you will) of the century. Many international events at the start of the new year provided inspiration for the discussion below of some of the legal and policy issues in this domain.

Let’s start in Hungary, where an end-of-January session of the so-called Digital Freedom Committee discussed unlawful practices of big IT companies, often also referred to as TAGAFA or the ‘big five’ and their affiliates. (To accommodate the trend, I will just refer to them as “big tech” from now on.) Then February kicked off with a revelation by a domestic news outlet that the draft bill of the Hungarian “Lex Facebook” would be rolled out by the Ministry of Justice by spring. (For now, the prevention of the third wave of COVID seems to have intervened, but the prospect of such a law is very intriguing, given the most recent developments in privacy protections, concerning primarily the physical space, introduced through constitutional amendment no. 7 and the subsequent adoption of Act No. LIII of 2018 on the protection of privacy.)

Now, to understand where the debate regarding any state regulation of big tech companies and the social media platforms they own and/or operate comes from, let’s turn back the hands of time about five years, to the United States. The Cambridge Analytica scandal – also involving Facebook – and its continuing ripple effects, leading up to the election of Donald Trump and the ensuing, deservedly infamous Mueller Report on the alleged Russian interference with the presidential elections, relit the spark of the ‘Social Dilemma’, this time in terms of state “restraints” on traditionally self-regulating social media platforms and the big IT companies behind them in the face of the free speech protections provided under the First Amendment.

Almost simultaneously, on the other side of the Atlantic, pioneering public commitments were undertaken – in cooperation with the European Commission – by the big tech companies present at that time in the European market, as part of a code of conduct to eradicate (broadly speaking) hate speech from their sites, a code that has since reached its fifth evaluation cycle (2020). Every so often political speech on these platforms may incite hatred or communicate ‘fake news’; it is therefore all the more important that both corporate and state regulation of these platforms be possible, mutually reinforcing each other in order to avoid further escalation and to underline the significance of prevention.

Rule-of-law checks (or rather controls) on freedom of expression on social media, however, cannot fully slip out of the hands of the state and state institutions, because this would, for example, take away the possibility of legal redress against corporate actions or decisions taken in this regard. On this point, the renowned Hungarian media law and freedom of speech expert, Professor András Koltay, said in an interview given to ICRP Budapest as early as 2017 that “Google and Facebook have implemented their own separate “legal systems” that are quicker and more efficient than the legal system of any state; nevertheless, the decisions adopted by them are neither transparent nor subject to any procedural guarantee of the rule of law.”

Corporate and state competences in this sector are concurrent and overlapping, and thus become shared as well. The creation of the Facebook Oversight Board (FOB) is a clear example of this concurrence with state authorities. Tamás Pongó argues that functional similarities are apparent between the FOB and national constitutional courts, the ECtHR or even the US Supreme Court. He is definitely right in stating that where a market actor establishes an autonomous ‘quasi court’, a new era begins in the limitation of freedom of speech. The FOB may indeed be very similar in its function to a Supreme Court. Maybe this is why Juncal Montero Regules so cleverly identifies its decisions – in the context of hate speech moderation – as the “Marbury v Madison of platform governance” on Verfassungsblog. In her post, she also pointed out how the FOB failed to take into consideration the “nature and reach” of the posts examined, which – she argues – inform any decision about context. If we take a different approach to the context of these cases, the context of their reception and perception is also important. A recent Ars Technica article demonstrated that in 4 out of 5 cases the FOB ruled against the company’s decision, which might or might not be – as Regules argues in reference to the Board itself – part of “a PR exercise from Facebook”.

Whatever one may think about the eventual success or failure of the FOB, my earlier argument that there is a concurrence – better yet, a competition – between corporate and state bodies to regulate social media still stands. We can see many more examples of this worldwide. In recent months and weeks, we have seen news reports on how states try to regulate, to some extent, the workings of social media and the big tech companies behind them – just consider the ban on the former US president’s accounts due to his involvement in the events at the Capitol on 6 January.

Poland again made it into the crosshairs of public opinion when the deputy minister of justice, Sebastian Kaleta, published an honest confession in Newsweek online about why he decided to regulate big tech companies. He referred to John Milton’s famous speech to Parliament, Areopagitica, in which the famous orator pleaded for freedom of the press from limitations imposed by the government in mid-17th-century England. The Polish politician then addressed the gatekeeping function symbolizing control, which – he argued – is now (on the internet) no longer in the hands of the state but in those of big tech companies, and this was “the straw that broke the camel’s back” leading to the Polish draft bill on the state regulation of social media. What is currently known about the draft law (announced at the end of January) from secondary sources (available in English), without an actual legislative text to look at, is that it purports to establish a Freedom of Speech Council of six members elected by the Sejm (with a 3/5 majority) for six-year terms from the fields of law and media. The Council is to be designed as a sort of appeals forum against decisions made by social media platforms, and its proceedings and competences would be tailored to Polish national law. In other words, it would serve as a “reverse FOB”, a state equivalent of that body.

Based on all of these examples, the oh-so-often heard question rings in everyone’s ears: quis custodiet ipsos custodes? Who watches whom, then? I had this in mind when I referred to the 2016 EU Commission code of conduct above. This EU initiative and its afterlife in terms of regulating the European digital market is not only important in and of itself. It is significant because it has left an imprint on Member State laws and legislative thinking as well. Germany, for example, adopted its Act to Improve Enforcement of the Law in Social Networks in 2017 (which then entered into force in 2018), also called the NetzDG, regarding which certain early caveats were voiced by Stephan Theil on Verfassungsblog.

As one of the gatekeepers of online freedom of speech, the European Court of Human Rights has decided many cases, involving Hungary as well, evaluating national rules or their absence in this respect. In the Member States of the Council of Europe, as some of the above examples also show, state answers to the regulation of online free speech platforms (especially social media) and the big tech companies behind them are manifold, but this is the situation in other parts of the globe as well. Everyone tries to cope with these new-found challenges as best they can.

In Turkey, news reports warned that the advertising activities of Twitter, Periscope and Instagram had been halted and their platforms subjected to bandwidth and access limitations, as they had not established or appointed a national contact office under the legal obligations to do so adopted by Turkish legislators last October. Facebook, for its part, only complied with these requirements at the end of January, after a longer period of resistance.

From Australia, news reports let us know that Google threatened to block access to its search engine on a nation-wide scale in response to plans for a unique piece of Australian legislation (the News Media and Digital Platforms Mandatory Bargaining Code) requiring big tech companies to pay royalties for news content displayed on their platforms and ‘taken’ from news agencies and services. According to information made public by the BBC, legislators behind the “news code” have referred, as justification, to Google’s monopoly, the lack of market competitors, and a government classification of the Google search engine as a “near essential utility”. In response, Facebook banned news feeds in Australia on a nation-wide scale this past week, only to lift the ban after a few days, in what can easily be interpreted by an astute observer as a “show of force” aimed at reaching a deal with the government. Regardless, the law is before the Senate awaiting further debate in the coming weeks. In other news, we can also read that Google has already agreed to pay royalties to Rupert Murdoch’s News Corporation for content, so ‘mandatory bargaining’ already works to some extent. This last example is only tangentially connected to online free speech on social media platforms, but – in the intended context of the proposed legislation (possibly to counter the spread of fake news?) – it nonetheless raises important questions regarding access to public interest information in the form of news. The quality, frequency and accessibility of news fundamentally shape public opinion, including the public opinion reflected on social media platforms on the basis of that news.

Based on the above, the tug-of-war between state regulators and corporate giants is very much visible. Both sides want to dominate the online market of opinions. The century’s ‘IT’ debate, however, cannot be resolved by ‘mutually assured destruction’ in this field. It should rather be approached through the creation of a fair and equitable balance between corporate self-regulation and state-imposed rules. To use a very chic turn of phrase from constitutional discourse on free speech, neither of the two sides can be the ‘captive audience’ of the other, because if that should come to be, only freedom of speech would suffer the consequences.

In a constitutional legal sense, at least based on the practice of the Hungarian Constitutional Court, freedom of speech is not an absolute right, but as the so-called ‘mother-right’ of communication rights it must give way only to very few rights. “Although the privileged place accorded to the right of freedom of expression does not mean that this right may not be restricted – unlike the right to life or human dignity which are absolutely protected – but it necessarily implies that the right to free expression must only give way to a few rights; that is, the Acts of Parliament restricting this freedom must be strictly construed. The Acts of Parliament restricting the freedom of expression are to be assigned greater weight if they directly serve the realisation or protection of another individual fundamental right, a lesser weight if they protect such rights only indirectly through the mediation of an institution, and the least weight if they merely serve some abstract value as an end in itself.” – argued the Court as early as 1994, and these lines have a very familiar ring to them in light of current debates, globally and in Hungary alike.

When we address big tech issues, cooperation between these companies and the respective states is a key factor in ensuring compliance: it is not enough for them to have their users and employees comply with their internal governance norms. States, in turn, should provide leeway for self-regulation and for CSR to penetrate into the domain of free speech checks, but within certain clear limits. Red lines need to be drawn, serving as gatekeepers of the gatekeeping functions of social media platforms and the big tech companies behind them. Traditional state controls over these forms of expression must not be ceded exclusively to corporate interests, functions and forums; certain essential state functions in this realm must remain, complemented by corporate tools to the extent necessary.

Márton SULYOK, lawyer (PhD in law and political sciences), legal translator. As a graduate of the University of Szeged, he has been working for his alma mater in different academic and management positions since 2007. He is currently a Senior Lecturer in Constitutional Law and Human Rights at the Institute of Public Law of the Faculty of Law and Political Sciences (Szeged), and the head of the Public Law Center at MCC (Mathias Corvinus Collegium) in Budapest. He previously worked for the Ministry of Justice in Budapest and has been an Alternate Member on the Management Board of the Fundamental Rights Agency of the European Union (2015-2020). E-mail: msulyok@mcc.hu