István ÜVEGES: Social and political implications of the use of artificial intelligence in social media

Artificial intelligence-based algorithms are now of inescapable importance in many fields. Their applications include automatic content recommendation systems for streaming providers, chatbots (e.g. ChatGPT), Google’s search interface, etc. The applications listed above are designed to help users make decisions, find information, or organize the vast amount of information available online to make it easier to find what they are looking for. In fact, many of the most popular online services are nowadays unthinkable without artificial intelligence, since it makes navigating the vast amount of data present in the online space efficient and accessible to all.

In addition to the above, however, other uses of digitized data can be envisaged which are less obvious and are not necessarily aimed at satisfying the needs of the average user, but rather at serving market or political interests, even at the cost of (conscious or unintentional) invasion of privacy.

In the world of artificial intelligence, and specifically in its subfield of machine learning, the quantity and quality of training data are key factors. In the traditional sense, privacy in the online/digital space covers private conversations, social media posts and other information related to the individual. However, in addition to these, users leave behind numerous online footprints that are either not protected at all or are protected only inadequately by the legal rules on privacy.

Examples include data sets such as browsing history, content viewed or ‘liked’, individual contact networks, geolocation data, etc. Until the last decade, this information existed mostly in isolation, on separate servers, under the ‘authority’ of different data controllers or collectors. However, from the point at which these data sources became interoperable (whether through the activities of data brokers or otherwise), they have given rise to a mass of data (mostly referred to as ‘big data’) which nowadays offers the possibility of psychological profiling of the source individual, micro-targeting of ads and content, or even the use of psychometric methods.

Unlike traditional information that people are basically aware of sharing (for example, uploading a photo), this data is often generated in ways the user is not necessarily aware of. Nevertheless, by using it, machine learning algorithms have become a far more effective profiling tool than before, capable of (automatically) recognizing and attributing characteristics to a person, be they party preferences or other interests. Mapping the groups thus formed (e.g., by unsupervised machine learning algorithms) back to the individual is the key to developing effective and automated opinion-forming techniques.
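The unsupervised grouping step mentioned above can be illustrated with a toy sketch: a minimal k-means clustering over invented engagement features (all user names, topics and numbers below are hypothetical). A real system would operate on vastly more dimensions, but the principle of discovering groups without any labels is the same.

```python
# Hypothetical engagement vectors: share of likes per topic
# (politics, sport, music). Purely invented for illustration.
users = {
    "u1": [0.90, 0.05, 0.05],
    "u2": [0.85, 0.10, 0.05],
    "u3": [0.10, 0.80, 0.10],
    "u4": [0.05, 0.90, 0.05],
    "u5": [0.10, 0.10, 0.80],
    "u6": [0.05, 0.15, 0.80],
}

def dist(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, init, iters=10):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster."""
    centroids = [list(c) for c in init]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Seed with three arbitrary users; no labels are ever used.
centroids = kmeans(list(users.values()), init=[users["u1"], users["u3"], users["u5"]])
labels = {
    name: min(range(len(centroids)), key=lambda i: dist(vec, centroids[i]))
    for name, vec in users.items()
}
```

Users with similar engagement end up in the same cluster; mapping such clusters back to named profiles is precisely the step that turns harmless-looking footprints into targeting groups.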

The process by which data is “turned into gold” in the right hands[1] and put to the service of business or policy interests involves multiple stakeholders, along with a range of technological innovations, emerging trends, regulatory challenges, and perspectives.

In response to the insatiable demand for data from machine learning algorithms, there is now an entire industry dedicated to collecting and selling user data in the most efficient and detailed way possible. Given the rapid progress in both IT and artificial intelligence research, it is reasonable to assume that the problems we are already seeing (data leaks, manipulation, micro-targeting, psychometric profiling, etc.) will only get worse in the future without the right regulatory environment or may be replaced by new challenges that are not yet foreseen.

Among the (already existing) uses of artificial intelligence that are of concern, this paper presents some of the ways in which it can be used to influence election outcomes. The issue of political polarization in social media is also discussed in more detail.

Electoral manipulation

In modern democracies, weaponized / manipulative AI poses a serious threat to the fairness of elections, but also to democratic institutions more generally. In the case of elections, the outcome can be influenced in several ways, in line with the interests of a third party.

Attacks carried out with artificial intelligence in the service of malicious economic or political interests can take the form of “physical” attacks (such as the paralysis of critical infrastructure or data theft), or of psychological operations that poison voters’ trust in the electoral system or discredit certain public actors[2].

In the present context, micro-targeting refers to personalized messaging that has been fine-tuned based on previously collected data about a given user, such as an identified psychological profile. Messages targeted in this way are much more likely to influence or even manipulate opinion than traditional advertising techniques.

This is exemplified by the suspected cases of abuse uncovered by the Mueller report[3] in connection with the 2016 US presidential election, one of the main arenas of which was social media platforms.

The heightened concern about such activities is illustrated by the fact that, following the introduction of the GDPR[4], several EU Member States have initiated investigations against companies involved in data collection. For example, the Irish Council for Civil Liberties (ICCL) report[5] raises serious concerns about the activities of Google and other large-scale operators whereby data collection companies auction information about users, linked to their real-time geolocation, to potential advertisers and then transmit the data packets to the ‘winning’ bidder (Real Time Bidding – RTB). In several of the cases studied, the data transmitted in this way included sensitive health characteristics such as diabetes, HIV status, brain tumors, sleep disorders and depression[6].
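The mechanics of RTB described in the report can be sketched in a few lines. The following is a deliberately simplified, hypothetical model (the bidder names, bid logic and data fields are invented); its point is that the bid request, including the user’s data, is broadcast to every participating bidder before the auction is even decided.

```python
from dataclasses import dataclass, field

@dataclass
class BidRequest:
    # Fields illustrative of what an RTB bid request may carry.
    user_id: str                  # pseudonymous but stable identifier
    geolocation: tuple            # real-time (latitude, longitude)
    interest_segments: list = field(default_factory=list)

def run_auction(request, bidders):
    # Every bidder receives the full request in order to price its bid,
    # so the user's data leaves the exchange regardless of who wins.
    bids = {name: bid_fn(request) for name, bid_fn in bidders.items()}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

request = BidRequest("abc-123", (52.23, 21.01), ["politics", "health"])
bidders = {
    "adco":  lambda r: 0.80 if "politics" in r.interest_segments else 0.10,
    "medix": lambda r: 1.20 if "health" in r.interest_segments else 0.20,
}
winner, price = run_auction(request, bidders)
```

In this sketch the health-focused bidder wins the impression, but the politics-focused bidder has seen the same sensitive segments; that broadcast step is what the report identifies as the breach.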

The report found that in some cases, Google’s RTB system forwarded users’ data packets (which may have included the above-mentioned sensitive data without filtering) hundreds of times a day. The value of the data, and the seriousness of the leak, is illustrated by the fact that (also according to the report) it was used by some market/political actors to influence the outcome of the 2019 Polish parliamentary elections.

One such actor, the data broker OnAudience, used data from around 1.4 million Polish citizens to help target people with specific interests when displaying election-related ads. According to the company, although the data packets were processed and transmitted anonymously, they were still tied to unique identifiers pointing to specific, real individuals. Moreover, these identifiers can be linked to the databases of other companies and thus combined into a single profile[7]. This implies threatening market behavior not only in terms of compliance with the GDPR, but also in terms of the violation of privacy rights.
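The cross-database linkage described above can be illustrated with a minimal, hypothetical sketch: two brokers hold different attributes under the same pseudonymous identifier, so a simple join reunites them into one profile without any name ever being exchanged. All identifiers and attributes below are invented.

```python
# Hypothetical broker records keyed by a shared pseudonymous identifier.
broker_a = {"id-42": {"geo": "Warsaw", "interests": ["politics"]}}
broker_b = {"id-42": {"device": "android", "income_band": "mid"}}

def link_profiles(*databases):
    """Merge any number of databases on their common identifiers."""
    merged = {}
    for db in databases:
        for uid, attrs in db.items():
            merged.setdefault(uid, {}).update(attrs)
    return merged

profile = link_profiles(broker_a, broker_b)["id-42"]
```

The merged profile carries both brokers’ attributes; ‘anonymous’ identifiers that remain stable across databases are exactly what makes this aggregation possible.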

Opinion bubbles and political polarization

In addition to the above, it is also significant that social media platforms, to maximize users’ time on the platform, typically present content that best matches the personality of the user, i.e., that is most likely to be of interest to them.

This kind of (AI-enabled) content pre-screening has highlighted two new and important problems in recent years. The first is the problem of the often-false positive feedback generated by the homogeneity of the ranked content, and the second is the issue of political polarization often associated with it.

The former is driven by the phenomenon that social media platforms are making it possible for people to connect with others who share a similar worldview to their own on an unprecedented scale. This kind of social selectivity, coupled with the content filtering technologies[8] of the platforms, results in the creation of psychosocial bubbles that essentially limit the extent of possible social connections and interactions, as well as exposure to novel, even relevant information[9].

This phenomenon has been studied since the 2010s, mainly on the basis of informatic and structural measures of online behavior and social networks[10]. Among the later research, the Identity Bubble Reinforcement Model (IBRM)[11] stands out, with the dedicated aim of integrating the social psychological aspects of the problem and human motivation into the earlier results. According to this model, the expanded opportunities for communication and social networking in social media allow individuals to seek social interactions (mainly) with people who share and value their identity. This identity-driven use of social media platforms can ultimately lead to the creation of identity bubbles, which can manifest themselves in three main ways for the individual:

  • identification with online social networks (social identification),
  • a tendency to interact with like-minded people (homophily),
  • and a primary reliance on information from like-minded people on social media (information bias).

Within social media, these three elements are closely correlated and together reflect the process of reinforcing the identity bubble.


The data generated online can also be used to make predictions about users’ personality traits. One of the priority areas here is psychometrics. This is closely related to the use of the online footprint (and its connection with the right to privacy and confidentiality) and is now also known as a possible technique for influencing voter opinion.

Psychometrics (also known as psychometry) is the field of psychology that deals with testing, measurement, and evaluation. More specifically, the field deals with the theory and techniques of psychological measurement, i.e., the quantification of knowledge, skills, attitudes, and personality traits. Its classical tests aim to measure, for instance, the general attitude of employees in a work environment, their emotional adaptability, and their key motivations, but the field also includes aptitude tests assessing success in mastering specific skills, as well as classical IQ tests[12].

In the context of social media, and big data in general, the concept came to the fore mainly in the context of the 2016 US presidential election, along with another technique, micro-targeting.

The name of Cambridge Analytica is inescapable on this topic. The company first received significant media attention in July 2015, shortly after it was hired by Republican presidential candidate Ted Cruz’s team to support his campaign[13]. Although the campaign was unsuccessful, Cambridge Analytica’s CEO claimed that the candidate’s popularity had increased dramatically thanks to the company’s use of aggregated voter data, personality profiles and personalized messaging/micro-targeting techniques. The firm may also have played a role in shaping the outcome of the Brexit campaign according to a familiar scenario[14]. In 2016, it was also suspected that Donald Trump had hired the company to support his campaign against Hillary Clinton. In this context, there are reports that Cambridge Analytica employed data scientists who enabled the campaign team to identify nearly 20 million swing voters in states where the outcome of the election could have been influenced[15]. Winning over voters in these states could ultimately and significantly boost Trump’s chances in key states, as well as in the general election[16].

The company also claims that one of the keys to its success was the combination of traditional psychometric methods with the potential inherent in big data. Its free personality tests, distributed on social media platforms, promised users more information about their own personality traits at no cost[17]. The data submitted could then be linked by Cambridge Analytica to the name of the submitter and a link to their profile[18].

The resulting data set (supplemented by other public and private user data) allowed the company to classify some 220 million US voters into 32 different personality types, which could then be targeted by the ads that most appealed to them[19].

Given the right amount of data, the method can also be run in reverse: by collecting the same kinds of data from users who never took the survey, that data can serve as input for machine learning models trained on the surveyed users, which can then classify the previously unprofiled users into the personality groups mentioned above. Although the real success of Cambridge Analytica’s methods has never been clearly established, the moral, political and security concerns surrounding the company undoubtedly highlight both the potential of online footprint data and the ways in which it can be used that are legally unregulated or morally and ethically questionable.
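The “reverse” step described above can be sketched as a toy nearest-centroid classifier (all features, group labels and numbers are invented for illustration): users who took the survey supply labeled examples, and users who never did are then assigned to the nearest personality group on the basis of their footprint alone.

```python
# Surveyed users: hypothetical footprint features
# (news likes, sports likes, late-night activity) -> survey-derived label.
surveyed = {
    (0.90, 0.10, 0.20): "analytical",
    (0.80, 0.20, 0.30): "analytical",
    (0.10, 0.90, 0.70): "impulsive",
    (0.20, 0.80, 0.80): "impulsive",
}

def centroid(vectors):
    # Component-wise mean of a list of equal-length vectors.
    return tuple(sum(col) / len(vectors) for col in zip(*vectors))

# One prototype (mean footprint) per personality group.
prototypes = {
    label: centroid([v for v, lab in surveyed.items() if lab == label])
    for label in set(surveyed.values())
}

def classify(footprint):
    # Assign the group whose prototype is closest (squared distance).
    return min(
        prototypes,
        key=lambda lab: sum((a - b) ** 2 for a, b in zip(footprint, prototypes[lab])),
    )

# A user who never took any survey is still assigned a personality group.
prediction = classify((0.15, 0.85, 0.90))
```

A handful of surveyed users thus suffices, in principle, to label an arbitrarily large population of unsurveyed ones, which is what made the survey-plus-footprint combination so valuable.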

Taken together, the above illustrates the potential lying in the ever-increasing amount of data available on the internet. However, given that the so-called ‘data-driven economic model’ (where the primary source of profit is not industrial production but people’s attention) is not yet fully developed, the ethical and legal concerns that have already been raised undoubtedly highlight the risks of further proliferation and refinement of AI-based technologies, leaving many questions unanswered.


Initiatives are already being taken to tackle these problems. For example, the European Union’s efforts to achieve digital sovereignty[20] seek to respond to the uneven global distribution of artificial intelligence capacities (research, infrastructure), which currently works to the detriment of the Union. Significant progress has been made with the adoption of the GDPR in relation to the processing and use of personal data, but (as the above-mentioned report of the Irish Council for Civil Liberties reveals) it is far from clear in practice what an effective and appropriate way forward would be on issues that are not currently regulated, or how abuse can be detected.

Given that the function of law is primarily to respond to social and technological changes that have already occurred by fine-tuning the regulatory environment, a comprehensive study of the problems related to AI from a legal perspective is also essential.

Another issue not discussed in detail in this article, but also of particular importance, is the duality inherent in AI-based capacities concentrated in the hands of the state. Such capacities can be used both to defend liberal democracies and to build authoritarian (and/or surveillance) states, as the People’s Republic of China has done, for instance, by introducing a ‘social credit system’[21].

After examining the issues involved, perhaps the most important finding is the need to improve the regulations surrounding artificial intelligence, to update them to meet the challenges of the times, and to develop cyber defense procedures that can detect, predict and possibly prevent manipulative techniques using artificial intelligence.

[1] The quote refers to a common saying, especially in the United States, which emphasises the data-based dimension of economic growth: ‘Data is the new gold’. (e.g., Rachel Nyswander Thomas: Data is the New Gold: Marketing and Innovation in the New Economy. Accessed: 12. 22. 2022.)

[2] In addition, artificial intelligence can be used to amplify the effects of efforts to distort election results, such as gerrymandering, which are not really relevant to the topic of this paper. Cf. Manheim, Karl – Kaplan, Lyric: Artificial intelligence: Risks to privacy and democracy. Yale JL & Tech. 21, 2019, p. 133-135.

[3] Robert S. Mueller, III: Report on the Investigation Into Russian Interference in the 2016 Presidential Election (Accessed: 12. 19. 2022.)

[4] (EU) 2016/679

[5] Ryan, Johnny: Two years of DPC inaction on the ongoing RTB data breach – Submission to the Irish Data Protection Commission (21 September 2020)

[6] Ibid. 6-7.

[7] Ibid. 5.

[8] For example, ranking content in the newsfeed according to relevance and interests.

[9] Kaakinen, Markus – Sirola, Anu – Savolainen, Iina – Oksanen, Atte: Shared identity and shared information in social media: development and validation of the identity bubble reinforcement scale. Media Psychology, 23:1, 25-51, 2020, p. 25-26.

[10] Pariser, Eli: The filter bubble: What the Internet is hiding from you. London, England: Penguin, 2011

[11] Zollo, Fabiana – Bessi, Alessandro – Del Vicario, Michela – Scala, Antonio – Caldarelli, Guido – Shekhtman, Louis – Quattrociocchi, Walter: Debunking in a world of tribes. PloS ONE, 12(7), 2017

[12] Krysten Godfrey Maddocks: What is Psychometrics? How Assessments Help Make Hiring Decisions (Accessed: 12. 22. 2022.)

[13] Vogel, Kenneth P. – Parti, Tarini: Cruz partners with donor’s ‘psychographic’ firm (Accessed: 12. 22. 2022.)

[14] Doward, Jamie – Gibbs, Alice: Did Cambridge Analytica influence the Brexit vote and the US election? (Accessed: 12. 22. 2022.)

[15] Blakely, Rhys: Data scientists target 20 million new voters for Trump (Accessed: 12. 22. 2022.)

[16] González, Roberto J.: Hacking the citizenry?: Personality profiling, ‘big data’ and the election of Donald Trump. Anthropology Today 33.3, 2017, p. 9-12.

[17] The results could be evaluated according to the Big Five personality model, a long-established, fundamental concept in personality psychology research about the classification of an individual’s personality traits into factor groups. These main traits are extraversion, friendliness, conscientiousness, emotional stability, and culture/intellect.

[18] Harry Davies: Ted Cruz using firm that harvested data on millions of unwitting Facebook users (Accessed: 12. 22. 2022.)

[19] Confessore, Nicholas – Hakim, Danny: Data Firm Says ‘Secret Sauce’ Aided Trump; Many Scoff (Accessed: 12. 22. 2022.)

[20] EPRS Ideas Paper – Towards a more resilient EU: Digital sovereignty for Europe (Accessed: 12. 23. 2022.)

[21] Nicholas Wright: How Artificial Intelligence Will Reshape the Global Order – The Coming Competition Between Digital Authoritarianism and Liberal Democracy (Accessed: 12. 23. 2022.)

István Üveges is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement. 

Vagelis PAPAKONSTANTINOU: The (New) Role of States in a ‘States-As-Platforms’ Approach

The forceful invasion of “online platforms” not only into our everyday lives but also into the EU legislator’s agenda, most visibly through the DSA and DMA regulatory initiatives, has perhaps opened up another approach to state theory: what if states could also be viewed as platforms themselves? Within the current digital environment, online platforms are information structures that play the role of information intermediaries, or even “gatekeepers”, among their users. What if a similar approach, that of an informational structure, were applied to states as well? How would that affect their role under traditional state theory?

The ‘States-as-Platforms’ Approach

Under the current EU law approach, online platforms essentially “store and disseminate to the public information” (DSA, article 2). This broadly corresponds to the digital environment around us, accurately describing a service familiar to us all whereby an intermediary offers to the public an informational infrastructure (a “platform”) that stores data uploaded by a user and then, at the request of that same user, makes such data available to a wider audience, be it a closed circle of recipients or the whole wide world. In essence, the online platform is the necessary medium to make this transaction possible.

Where do states fit in? Basically, states have held the role of information intermediaries for their citizens or subjects since the day any type of organised society emerged. Immediately at birth humans are vested with state-provided information: a name, as well as a specific nationality. Without these a person cannot exist. A nameless or stateless person is unthinkable in human societies. This information is subsequently further enriched within modern, bureaucratic states: education and employment, family status, property rights, taxation and social security are all information (co-)created by states and their citizens or subjects.

It is with regard to this information that the most important role of states as information brokers comes into play: states safely store and further disseminate it. This function is of paramount importance to individuals. To live their lives in any meaningful manner, individuals need to have their basic personal data, first, safely stored for the rest of their lives and, second, transmittable in a validated format by their respective states. In essence, this is the most important and fundamental role of states, taking precedence even over the provision of security. At the end of the day, the provision of security is meaningless unless the state’s function as an information intermediary has been provided and remains in effect—that is, unless the state knows who to protect.

What Do Individuals Want?

If states are information brokers for their citizens or subjects what is the role of individuals? Are they simply passive actors, co-creating information within boundaries set by their respective states? Or do they assume a more active role? In essence, what does any individual really want?

Individuals want to maximise their information processing. This wish is shared by all, throughout human history. From the time our ancestors drew on cave walls and improved their food-gathering skills, through the Greco-Roman age, the Renaissance and the Industrial Revolution, humans have always tried, and succeeded, to increase their processing of information, to maximise their informational footprint. Or, in Van Doren’s words, “the history of mankind is the history of the progress and development of human knowledge. Universal history […] is no other than an account of how mankind’s knowledge has grown and changed over the ages”.

At a personal level, if it is knowledge that one is after, then information processing is the way of life that person has chosen. Even a quiet life, however, would be unattainable if new information did not compensate for the inevitable change around us. And, for those after wealth, what are riches other than access to more information? In essence, all of human life and human experience can be viewed as the sum of the information around us.

Similarly, man’s wish to maximise his information processing includes the need for security. Unless humans are and feel secure, their information processing cannot be maximised. On the other hand, this is as far as the connection between this basic quest and human rights or politics goes: an increase in information processing may be favoured in free and democratic states, but this is not necessarily so. Human history is therefore a long march not towards democracy, freedom, human rights or any other (worthy) purpose, but simply towards information maximisation.

The Traditional Role of States Being Eroded by Online Platforms

Under traditional state theory, states exist first and foremost to provide security to their citizens or subjects. As most famously formulated in Hobbes’ Leviathan, outside a sovereign state man’s life would be “nasty, brutish, and short” (Leviathan, XIII, 9). It is to avoid this that individuals, essentially under a social contract theory, decide to forego some of their freedoms and organise themselves into states. The politics that these states form from that point on can go in any direction, ranging from democracy to monarchy or oligarchy.

What is revealing for the purposes of this analysis, however, is the frontispiece of Hobbes’ book: in it, a giant crowned figure is seen emerging from the landscape, clutching a sword and a crosier beneath a quote from the Book of Job (Non est potestas Super Terram quae Comparetur ei / There is no power on earth to be compared to him). The torso and arms of the giant are composed of over three hundred persons, all facing away from the viewer (see the relevant Wikipedia text).

The giant is obviously the state, composed of its citizens or subjects. It provides security to them (this is, after all, Hobbes’ main argument and the book’s raison d’être), but how is it able to do that? Tellingly, by staying above the landscape, by seeing (and knowing) all, by exercising total control over it.

Throughout human history, information processing was state-exclusive. As seen, the only thing individuals basically want is to increase their processing of information. Nevertheless, from the ancient Iron Age empires to the Greek city-states, the Roman empire or the medieval empires of the West and the East, this was done almost exclusively within states’ (or empires’) borders. With small exceptions (the narrow circles of merchants, soldiers or priests who travelled), any and all data processing by individuals was performed locally, within their respective states: individuals created families, studied, worked and transacted within closed, physical borders. There was no way to transact cross-border without state intervention, and thus control, whether in the form of physical border-crossing and the relevant paperwork, import/export taxes or, even worse, mandatory state permits to even leave town. This was as true in our far past as it was until the early 1990s, when the internet emerged.

States were therefore able to provide security to their subjects or citizens because they controlled their information flows. They knew everything, from business transactions to personal relationships. They basically controlled the flow of money and people through control of the relevant information. They could impose internal order by using this information and could protect from external enemies by being able to mobilise resources (people and material) upon which they had total and complete control. Within a states-as-platforms context, they co-created the information with their citizens or subjects, but they retained total control over this information to themselves.

As explained at a recent MCC conference last November, online platforms have eroded the above model by removing exclusive control of information from the states’ reach. By now, individuals transact over platforms, bypassing the mandatory state controls (borders, customs, etc.) of the past. They study online and acquire certificates from organisations that are not necessarily nationally accredited or supervised. They create cross-national communities and exchange information or carry out common projects without any state involvement. They have direct access to information generated outside their countries’ borders, completely uncontrolled by their governments. States, as information brokers profiting from exclusivity in this role, now face competition from platforms.

This fundamentally affects the Leviathan frontispiece described above. The artist chose to give all of the persons composing the giant no face towards the viewer; they face the state. This has changed with the emergence of online platforms: individuals now carry faces and look outwards, to the whole wide world that has suddenly been opened up to each one of us, in an unprecedented twist in human history.

The New Role of States

If the generally accepted basic role of states as providers of security is being eroded by online platforms, what can their role be in the future? The answer lies perhaps within the context of their role as information intermediaries (a.k.a. platforms), taking also into account that what individuals really want is to maximise their information processing: states need to facilitate such information processing.

Enabling maximised information processing carries wide and varied consequences for modern states. Free citizens who are and feel secure within a rule-of-law environment are in a better position to increase their informational footprint. Informed and educated individuals are able to process information better than uneducated ones. Transparent and open institutions facilitate information processing, whereas decision-making behind closed doors stands in its way. Similarly, information needs to be free or, at least, accessible to everybody under fair conditions. It also needs to remain secure, inaccessible to anybody without a legitimate interest in it. Informational self-determination is a by-product of informational maximisation. The list can go on almost indefinitely, assuming an informational approach to human life per se.

The above does not affect, at least directly, the primary role of states as security providers. Evidently, this task will (and needs to) remain a state monopoly. The same is true of other state monopolies, such as market regulation. However, under a states-as-platforms lens, new policy options open up, while older assumptions may need to be revisited. At the end of the day, from a “pursuit of happiness” point of view, if happiness ultimately equals increased information processing, then states need to, if not facilitate, then at least allow such processing to take place.

Vagelis Papakonstantinou is a professor at Vrije Universiteit Brussel (VUB) at LSTS (Law Science Technology and Society). His research focuses on personal data protection, both from an EU and an international perspective, with an emphasis on supervision, in particular Data Protection Authorities’ global cooperation. His other research topics include cybersecurity, digital personhood and software. He is also a registered attorney with the Athens and Brussels Bar Associations. Since 2016 he has been serving as a member (alternate) of the Hellenic Data Protection Authority, and he previously served as a member of the Board of Directors of the Hellenic Copyright Organisation (2013-2016).

Mónika MERCZ: Privacy and Combatting Online Child Sexual Abuse – A Collision Course?

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a Joint Opinion on the Proposal for a Regulation to prevent and combat child sexual abuse on the 29th of July, 2022. While this has not made huge waves in the public discourse, we must take a moment to discuss what this stance means for how we view data protection in relation to child protection, and specifically fighting against online child sexual abuse material (CSAM). The International Data Protection Day seems like a good occasion to contribute to this debate.

The Proposal aims to impose obligations regarding the detection, reporting, removal and blocking of known and new online CSAM. According to the Proposal, the EU Centre and Europol would work closely together in order to transmit information regarding these types of crime. The EDPB and EDPS recommend that, instead of giving law enforcement direct access to data, each case should first be assessed individually by entities in charge of applying safeguards intended to ensure that the data is processed lawfully. In order to mitigate the risk of data breaches, private operators and administrative or judicial authorities should decide whether the processing is allowed.

While child sexual exploitation must be stopped, the EDPB stressed that limitations on the rights to private life and data protection must respect the requirements of necessity and proportionality; thus only strictly necessary and proportionate information should be retained in these cases. The conditions for issuing a detection order for CSAM and child solicitation lack clarity and precision, which could unfortunately lead to generalised and indiscriminate scanning of the content of virtually all types of electronic communications. But is protecting our privacy truly worth the pain suffered by minors? Is it not already too late for our society to try to put privacy concerns first anyway? I believe that this issue is much more multifaceted than it would seem at first glance.

There are additional concerns regarding the use of artificial intelligence to scan users’ communications, which could lead to erroneous conclusions. While human beings make mistakes too, the fact that AI is not properly regulated is a serious issue, and this fault in the system may lead to numerous false accusations. The EDPB and EDPS stated in their opinion that “encryption contributes in a fundamental way to the respect of private life and to the confidentiality of communications, freedom of expression, innovation and growth of the digital economy.” However, it must be noted that more than one million reports of CSAM were made in the European Union in 2020. The COVID-19 pandemic was undoubtedly a factor in the 64% rise in such reports in 2021 compared to the previous year. This is cause for concern and should be addressed properly.

In light of these opposing views about the importance of individuals’ rights, I aim to find some semblance of balance. The real question is: how can we ensure that every child is protected from sexual exploitation, perpetrators are found and content is removed, while protecting ourselves from becoming completely transparent and vulnerable?

  1. Why should we fight against the online sexual exploitation of children?

First of all, I would like to point out how utterly vital it is to protect children from any form of physical, psychological or sexual abuse. Protecting children is not only a moral issue but also key to humanity’s future. Mental health experts have documented the consequences: the short-term effects of child sexual exploitation include regressive behavior, performance problems at school, and an unwillingness to participate in activities, while the long-term effects include depression, anxiety, eating disorders, obesity, repression, and sexual and relationship problems. These serious issues can affect people well into adulthood, diminishing their quality of life and ultimately leaving members of society less productive.

In addition to these serious psychological consequences, the fundamental rights of victims are infringed: the rights to life, health, personal freedom and security, as well as the right not to be tortured or exposed to other inhuman, cruel or degrading treatment, as guaranteed by the UDHR and other international instruments. In addition to the efforts made by countries that ratified the Convention on the Rights of the Child, I must also mention the United States Supreme Court decision and lower court decisions in United States v. Lanier. In this case we can see that, in the US interpretation, sexual abuse violates a recognized right of bodily integrity as encompassed by the liberty interest protected by the 14th Amendment. Although this American finding dates back to 1997, it has lost none of its validity in our online world.

Turning to the legal framework in my home country, Hungary, the Fundamental Law also protects the aforementioned rights. Article XV, under “Freedom and Responsibility”, states that “(5) By means of separate measures, Hungary shall protect families, children, women, the elderly and those living with disabilities.” While this is an excellent level of protection, I would propose adding the clause “Hungary shall take measures to protect children from all forms of sexual exploitation”; even if we do not add it to our constitution, we must make it a priority. Act XXXI of 1997 on the Protection of Children and the Administration of Guardianship is simply not enough to keep children safe from new forms of sexual abuse, in particular online exploitation. With the dark web providing a place for abusers to hide, what options do we have to expose these predators and recover missing children?

A study explored a sample of 1,546 anonymous individuals who voluntarily responded to a survey while searching for child sexual abuse material on the dark web. 42% of the respondents said that they had sought direct contact with children through online platforms after viewing CSAM, and 58% reported fearing that viewing CSAM might lead to sexual acts with a child or adult. The situation is therefore dire and needs a firm response at the EU level, or possibly even at a wider international level. Sadly, cooperation between countries with different legal systems is incredibly difficult and time-consuming, and could also lead to violations of privacy as well as false accusations and unlawful arrests. This is where several of the concerns of the EDPB and EDPS arose, in addition to the data protection aspects mentioned before.

  2. Avoiding a Surveillance State

Having discussed the effects and frequency of child sexual abuse online, I have no doubt that the readers of this blog agree that drastic steps are needed to protect our most vulnerable. However, the issue is complicated by the fear that data provided by honest people wishing to help catch predators could lead to data protection essentially losing its meaning. Many dire consequences could pervade our lives if data protection were, metaphorically, to “fall”. It is enough to think of China’s Social Credit System and surveillance state, a prime example of what can happen when the members of society become transparent instead of the state. Uncontrolled access to anyone’s and everyone’s data under the guise of investigating online abuse could easily strengthen surveillance capitalism, make our data completely visible, and cause privacy essentially to cease to exist.

Right now, personal data is protected by several laws, most notably the GDPR and, in Hungary, Act CXII of 2011 on the Right of Informational Self-Determination and on Freedom of Information. The latter is upheld in particular through the work of the Hungarian National Authority for Data Protection and Freedom of Information. The Fundamental Law of Hungary also affirms the vital nature of data protection in Article VI (3)[1] and (4)[2]. I advise readers to look at the relevant legal framework themselves; here I shall focus on the data-protection aspects pertinent to this discussion.

There are several declarations by politicians and institutions alike that reinforce how essential this field of law is. This is of course especially true in the case of the European Union. As has been previously stated in one of our posts here on Constitutional Discourse, by Bianka Maksó and Lilla Nóra Kiss, the USA has a quite different approach. But can we justify letting children go through horrific trauma in order to protect our personal information? Which one takes precedence?

  3. A moral issue?

On the most basic level, we might believe that our information cannot be truly protected, so we might as well take a risk and let our data be scanned, if this is the price we must pay in order to protect others. But are we truly protecting anyone, if we are making every person on Earth more vulnerable to attacks in the process?

The Constitutional Court of Hungary has long employed the test of necessity and proportionality to determine which of two colliding fundamental rights must be restricted. I shall defer to its wisdom and aim to replicate its reasoning in a greatly simplified form, as the obvious limitations of a blog post require. My aim is to consider whether we could justify an infringement of data protection in the face of a child’s right to life and development.

First, I shall examine whether the restriction of our right to data protection is absolutely necessary. If the right of children not to suffer sexual exploitation online (which, again, contains facets of their rights to life, health, personal freedom and security, as well as their right not to be tortured or exposed to other inhuman, cruel or degrading treatment) can be upheld in any other way that is less intrusive but still effective, then restricting privacy is not necessary. While expert opinion leans towards the view that privacy must be upheld, I would like, respectfully, to see it from another side. We are currently trying to implement measures to stop online child abuse in all its forms, but they yield few results, and the problem is growing. Many claim that cooperation between law enforcement, hackers, different countries and many other actors could curb this crime further. Could we ever completely stop it? Probably not. But could we uphold children’s right not to be tortured or exposed to other inhuman, cruel or degrading treatment, and their right to healthy development? Maybe.

I put forward the idea that at this point we have no more effective measure to stop online child sexual abuse than restricting our own protection of personal data to a degree: child protection is a public interest, and protecting our posterity also has constitutional value. Additionally, the Fundamental Law of Hungary provides in Article I (3) that “(…) A fundamental right may only be restricted to allow the effective use of another fundamental right or to protect a constitutional value, to the extent absolutely necessary, proportionate to the objective pursued and with full respect for the essential content of that fundamental right.” As I have argued, child protection is undoubtedly of constitutional value and could warrant the restriction of data protection. On the other hand, the Constitutional Court of Hungary has established that privacy protection is also of constitutional value.[3]

As the second step of the test, based on my previous observations, I must wholeheartedly agree that data protection should only be restricted to the most indispensable extent. Because these two issues are so intertwined and difficult to balance, we could adopt a new policy specifically for cases where CSAM is sought through access to personal data. I firmly believe that such a solution could be found, but it would require establishing new agencies that deal specifically with the data protection aspects of such cases. The prevalence of this material on the Internet also makes it necessary to update the laws governing the relationship between privacy and recordings of CSAM.

I cannot think of a better alternative right now than a slight restriction of privacy, even with the added risks. The way things are progressing, the added weight of the global pandemic, inflation, war and climate change will lead to more children being sold and exploited for gain on online platforms, which are often untraceable. Are we willing to leave them to their fate in the name of protecting society as a whole from possibly becoming more totalitarian? Are we on our way to losing privacy anyway?

These are all questions for the future generations of thinkers, who may just develop newer technologies and safer practices, which make balancing these two sides of human rights possible. Until then, I kindly advise everyone reading this article to think through the possible consequences of taking action in either direction. Hopefully, on the International Day of Data Protection, I could gauge your interest in a discussion which could lead to concrete answers and new policies all across the EU in the future.

Mónika MERCZ, lawyer, is a PhD student in the Doctoral School of Law and Political Sciences at Károli Gáspár University of the Reformed Church, Budapest. A graduate of the University of Miskolc and former Secretary General of ELSA Miskolc, she currently works as a Professional Coordinator at the Public Law Center of Mathias Corvinus Collegium. She is a member of the Editorial Board of the Constitutional Discourse blog.


[1] (3) Everyone shall have the right to the protection of his or her personal data, as well as to access and disseminate data of public interest.

[2] (4) The application of the right to the protection of personal data and to access data of public interest shall be supervised by an independent authority established by a cardinal Act.

[3] Hungarian Constitutional Court Decision 15/2022. (VII. 14.) [24]

Lilla Nóra KISS: Professional Ethics and Morality Can Prevent Social Media From Becoming Sovereign

When top Russian diplomat Maria Zakharova explains that George Orwell’s dystopian classic Nineteen Eighty-Four was written to describe the dangers of Western liberalism rather than totalitarianism, we may feel as though we are watching an absurd Monty Python satire. In those parodies, artists question facts and overload conversations with extreme statements to criticize an existing system and discourage the audience from becoming participants in the absurd comedy. While such plays used to offer absurdity primarily for entertainment, they are gradually starting to normalize the reality that absurdity has become, which is decidedly less enjoyable.

To substantiate this claim, our current reality can be broken down into multiple components of narration: the producers of a play can be viewed as analogous to the owners of media outlets and social media platforms, the directors to the censors, the main actors to the influencers—journalists, politicians, policymakers, and other public figures—and the members of the audience to the users of said media outlets or citizens.

As reality slowly becomes absurd, members of the passively consuming audience become active participants in the play. Ownership, obviously, makes profit-oriented decisions; the aim is to maximize the audience and thus the profit. The more extreme and negative the content, the more people it reaches. The competition to become the most popular outlet slowly pushes the focus from professional, objective, and ethical information-sharing towards sensationalist content (also known as ‘clickbait’), since human beings operate under the ‘bounded rationality’ that Herbert A. Simon identified in 1947. Consequently, this competition rewards partially irrational decision-making. Humans have to make decisions based on the information available, but due to their cognitive and time limitations, people are vulnerable to the sources of that information. Today, social media serves as a general source of news for Americans. According to the Pew Research Center’s survey conducted in January 2021, Facebook stands out as a regular source of news for Americans (54%), while a large portion of Twitter users (59%) regularly gets news on the site. Since resources and capacities are limited, platforms have a green light to filter and pre-digest the news for their users. The cherry-picked news comes from well-selected sources and is delivered directly to users’ newsfeeds. The filtered information acts as a sub-threshold stimulus that unconsciously shapes users’ interpretations of certain topics. Combined with content whose framing lacks objectivity, users become easy targets of polarization. As a result, the demarcation lines between those who agree with a certain opinion and those who disagree become more acute.

In today’s age of surveillance capitalism – which Shoshana Zuboff described as a “bloodless battle for power and profit as violent as any the world has seen” – the limits of the professional ethics of journalists, politicians, and other influencers are a key question. Another significant point is the owners’ liability for intentionally (trans)forming public opinion. As Count István Széchenyi – often referred to as the Greatest Hungarian – famously expressed, “ownership and knowledge come with responsibilities.” In a world where all information is available on the internet and the owners of digital platforms are free to decide what to show to or hide from the masses, ownership of information becomes the most powerful means of shaping the future of society. There is no doubt that owners structure societies; the question is whether they do so with moral considerations in mind or purely for their own financial benefit.

The former approach would be the idealistic scenario: it requires a social media environment in which platform owners do not intend to form public opinion and therefore (1) allow all forms of speech as free speech, even speech prone to extremism; (2) let users pick and choose freely from millions of pieces of pre-generated information according to their consciously and explicitly preselected priorities (which obviously pushes the boundaries of limited human capacities and timeframes); and (3) neither tolerate nor practice any cancel culture. This also inherently implies that (4) even personae non gratae – Latin for “people not welcome” – would be allowed to use these platforms, even if their views are contrary to the views of the ownership and as such considered undesirable on their platforms. This would also entail a lack of double standards and a state of objective fairness. At the same time, such an ideal form of social media management would not automatically excuse crossing certain thresholds, such as sharing hate speech, child pornography, or other criminal content; the platforms would still be legally obligated to take the measures necessary to enable established public institutions to intervene and restore the balance. By the end of this description of the ideal social media platform, there should be no doubt that this utopian scenario does not currently exist.

In the latter case, however, without being overly pessimistic, the world becomes a worse place every day. In this sad but more realistic scenario, private entities are interested in playing with information and exploiting readers’ bounded rationality. As a result, owners can intentionally form public opinion as a side effect of their profit maximization. Of course, the profit-oriented approach is the legitimate interest of corporations – there is nothing wrong with that, as long as profit maximization happens in compliance with ethical and moral standards. However, this leads to a very interesting legal dilemma. On the one hand, corporate decisions to allow or restrict content are legitimate under Section 230 of the CDA (the US law setting the standards for ‘decent communications’ since 1996). On the other hand, such decisions may lead to illegitimate consequences, because corporations have no legitimate authority to act as sovereigns and to form, deform, or transform public opinion by using their power over information. Attaining entirely unmanipulated public opinion is, of course, unimaginable and unnecessary in general, but identifying the influencer is crucial. Yet tracing the influencer is almost impossible in the virtual sphere – hence the question of accountability for these platforms.

Translating the situation into the language of legal theory, the debate concerns the relationship between law and morals. Natural law theory holds that law should reflect moral reasoning and be based on a moral order, whereas legal positivism holds that there is no necessary connection between law and morality. A symbolic example that highlights the practical difference is that Nazi Germany and the Stalinist Soviet Union – two infamous totalitarian regimes of the 20th century – were rule of law regimes under a purely legal positivist interpretation. Under natural law, however, these states were not operating under the rule of law, and their laws were not valid, owing to the immorality of their content. This is because natural law requires morality to validate legal content, while legal positivism does not.

Reflecting the opposing views on the current issue, the legal positivist would ask, “what does the law say?”, and would provide the ‘easy answer’: private entities, including the owners of social media platforms, are legally entitled to make discretionary decisions regarding the content they share or ban on their own platforms, regardless of the influence they exert over their users. Natural law, however, would require adding the moral values of ‘good’ or ‘bad’, ‘right’ or ‘wrong’ to make an adequate evaluation and give a ‘legal’ or ‘illegal’ answer. Does these private entities’ exercise of their freedom to influence their users through the content they share lead to legal or illegal outcomes? If an act is technically legal, does it constitute the use or the misuse of corporate freedom under Section 230 CDA? In other words, does Section 230 license these corporations to shape public opinion? Is there any moral standard that the ownership should follow when making private decisions based on Section 230, especially knowing that those decisions may influence and manipulate users? Of course, it is difficult to measure morality, as its standards are highly relative, and it is even more complicated to evaluate morality in the digital sphere. Even so, basic minimum moral standards would support both the ownership in making fair decisions and the creation of the most objective possible environment for the news cycle. Introducing content-neutral and impartial minimum standards based on morality might therefore help shift the emphasis back from a partisan path to normalcy.

When the producers (owners) introduce moral principles to reach fairness, directors (censors) are free to manage their tasks within the frames of their professional ethics. The main players (the influencers: journalists, politicians, policymakers, and other public figures) are forced to serve the public interest instead of their own interests. The ownership has a huge responsibility for doing good within and for the society. Otherwise, the play becomes an absurd reality produced by quasi omnipotent owners, directed by unethical censors, and influenced by self-interested public figures.

Morality and law together can prevent social media ownership from becoming uncontested, illegitimate sovereigns. Preserving checks and balances between the private and the public spheres is important, and we have seen what happens when that balance is distorted. In communist dictatorships, private entities are weak compared to state actors and have extremely narrow room for maneuver in advocating their interests. In the People’s Republic of China and the Russian Federation, privately-owned media is virtually non-existent: the balance is distorted and private actors depend on public institutions. Dependency, in turn, leads towards toleration of oppression and ideology-based manipulation. As a result, absurd things ensue: the people of Russia may not know that their country has been invading Ukraine for roughly three months now. In China, Western “traditional” social media is geo-blocked and the Chinese have their own platforms. Strong totalitarian states do not tolerate any private intervention in their decision-making. On the other hand, it is also a mistake when private actors overreach their competences and influence public opinion to serve their private interests.

There is evidence that most people prefer normalcy over extremes. That is good news. Normalcy requires people in the middle to keep a healthy balance between private interest and public interest. Professional ethics, morality, and ownership liability can prevent private entities from becoming the new sovereigns directing alternative movies about digital absurdities.

Lilla Nóra KISS is a postdoctoral visiting scholar at Antonin Scalia Law School, George Mason University, Virginia. Lilla participates in the Hungary Foundation’s Liberty Bridge Program and conducts research on social media regulation and regulatory approaches. Formerly, Lilla was a senior counselor on EU legal affairs at the Ministry of Justice, and for five years she was a researcher and lecturer at the Institute of European and International Law of the University of Miskolc (Hungary), where she taught European Union law. Lilla obtained her Ph.D. in 2019 with a dissertation on the legal issues of the withdrawal of a Member State from the EU.

Her current research interests cover the legal dimensions of Brexit, the interpretation of the European Way of Life, and the perspectives towards social media regulation in the USA and in Europe.

Lilla Nóra KISS: Lex Facebook or Tax Facebook? Options beyond the self-regulation of IT companies

Tech giants can cause headaches – especially when we attempt to think simultaneously with the heads of consumers, states and tycoons. This post intends to reveal some dimensions of regulating social media from a legal point of view – or, to put it proverbially, of tilting at the windmills of seemingly unapproachable powers.

An unregulated social media environment can lead to interesting debates in many legal dimensions, because the topic is full of unclear definitions and opaque relationships between different actors. Let us start by noting that the original purpose and function of social media sites was to connect members of a community and provide them with multiple options for keeping in touch. The spread of the internet made it possible to share (and receive) information everywhere and at any time. Social media sites became ever more widespread and began offering options for sharing news alongside personal information; the content of the platforms thus expanded, and so did the supply. Human nature (especially our visceral curiosity about the lives of others) created an ever-widening consumer market for these expanding services, so demand also increased. The rapid pace of development and the race for market share encouraged social media providers to change and provide more, including a growing number of specialized platforms. In addition, more and more features were incorporated into the system, improving utilization and – as has by now been psychologically demonstrated – strengthening addiction in the daily lives of many people. This effect, and the “phenomenon” of social media, is well illustrated in the documentary The Social Dilemma and in the film Disconnect.

All these developments took place without the underlying concepts ever being clarified or defined, and without the development of even the minimum regulation that seems necessary.

The strange situation is that the real concepts of service and consideration, and the relationships between seller, buyer and product, are unclear in the world of social media. As Richard Serra put it in 1973, speaking about television commercials reaching millions, when a product is free, the question arises whether we are the product. I believe that Serra’s statement about television advertising is exponentially true in the digital age.

If we (consumers, users) are the products, then the question must be asked: who is the seller? A social media provider? If the social media sites do not sell, do they merely mediate between the products (users) and the real seller(s)? Today, in an increasingly conscious society, it is perhaps no longer news that our personal data (and all the information associated with us) carry economic (monetary) value and are considered a commodity. (See Shoshana Zuboff’s thoughts on surveillance capitalism.) It is valuable that buyers unknown to us(ers) can get to know us, including our habits and decision trends. By knowing us, they have the opportunity to influence us and our decisions via targeted ads based on our browsing habits and history…

The scientific and social concern behind this analysis is that if these tools support the measurement of behavior, the result could be dangerous, as it could affect masses of people, even across borders. The free (unregulated) use of the know-how of manipulating millions via the internet and applications – such as social media apps – is more than concerning…

If Artificial Intelligence serves and supports the understanding of the measured behavior of millions, it gives barely controllable power to those who control that information (Big Data), and the result could lead anywhere. As a simple conclusion, we might become a target audience (or, if you like, victims) without even realizing it. The advantages and disadvantages of direct marketing have been studied and exploited for decades. However, with the rise of social media sites, the traditional methods and techniques of direct marketing can make users (consumers) vulnerable with unforeseen efficiency. Nowadays especially, the effect is no longer just an incentive to buy, but extends to political manipulation. (Sorry, this option for fabricated reality reminds me of the famous 1997 movie Wag the Dog…) And finally, this option of manipulation leads us to the general legal and constitutional dimensions.

Firstly, it can be seen that we can get caught up even in the clarification of concepts. In the context of data profiling, issues of privacy and related data-protection concerns immediately arise. For example, the data and information we provide may be used (against us). If users of social media sites were aware that their data and profiles carry serious property value and that the social media sites profit from them, would they provide those personal data at all? If they were willing to disclose their personal data and other information knowing the above, would they do so for free? It is clear that the data have real economic potential. Could personal data be considered property? Anyone who knows the consumer (user) can reach him or her, regardless of whether the content is intended (or needed) by the user or whether he or she intended to receive, for example, political messages. So there are those who have an economic interest in learning personal information or in reaching targeted individuals of a specific profile. Thus, in addition to data protection, ownership issues may also arise, in particular with regard to the disposition of the subject matter of the property. While I am sure that treating “data as a subject of property” opens a Pandora’s box in legal thinking, this question cannot be avoided when we talk about social media regulation… The data protection dimension also opens up privacy issues, in which context the right to disconnect has become important since the 2000s; its significance has increased especially in the changing work-life environment caused by COVID-19.

Clarification of concepts is also important from a consumer-protection point of view. It is necessary to examine the extent to which users can be regarded as consumers, as the legal protection to which they are entitled may differ accordingly. The measurability of influence, the protection of minors, and the tightening of the legal and ethical framework around consumer manipulation are particularly important. An essential element of both data protection law and consumer protection law is an appropriate level of information provided to the user/consumer. This includes the responsibility of platform leadership to raise awareness of information vital to the public, which may be understood as a basic consumer need. The right to be informed should be treated as a minimum requirement for service providers.

The constitutional dimension mentioned above is relevant in several respects. On the one hand, users are not only consumers but, for the most part, citizens of particular states. Democratic states and legitimately elected leaders typically bear responsibility for informing citizens in real time about matters of public interest. Here, of course, one can reflect on how information is provided and analyze (and evaluate) certain conceptual elements and levels of matters of public interest, but abstracting from such questions of detail, we can perhaps examine the “phenomenon” of social media as a whole. By the term phenomenon, I want to express the elusiveness of social media. Social media used to serve to facilitate getting to know one another and rekindling old relationships; by now, however, it plays an active role in providing information by allowing news to be published and shared (disseminated). Public involvement is also important in the active protection of citizens, especially certain vulnerable groups (e.g., minors).

However, the involvement of the state and politics has appeared in a completely different dimension than we might have imagined. Previously, information spread through the press and the classical channels of the media. By now, politicians and state institutions have become active users of social media in order to share information, yet the legal framework has not been tightened at the same time. In particular, the role of the platforms in disseminating information has grown in importance over the last few years (see, e.g., Trump’s election as president, migration, the Brexit referendum, the COVID-19 pandemic, and the Trump–Biden “election war”). Here, however, the basic legal requirements of liability are lacking. Who is responsible for the shared content? Who needs to verify its veracity? Who is responsible for spreading false (fake) news? A free press comes with state guarantees and strict accountability rules in our modern democracies. However, what about social media?

In addition to civil liability, misinformation and its potential criminal consequences (e.g. incitement to crime, incitement against a community, hate speech and other hate crimes) are also important regulatory considerations. The other side of the constitutional dimension is freedom of speech and freedom of expression. There are well-established civil law rules determining the limits of freedom of expression (e.g., violations of the right to privacy). There are also criminal law frameworks that primarily address the categories of defamation and hate crimes.

As can be seen, this topic has substantive public- and private-law aspects. Constitutional law, criminal law, civil law, data protection, and consumer protection all allow for a legal assessment of possible regulation of social media.

In procedural terms, however, we face serious shortcomings. On the one hand, the cross-border nature of the phenomenon raises questions of competence: who is entitled to regulate social media, by what means, to what extent, and with what personal, territorial and temporal scope? Who is entitled to control the effectiveness of regulation and deal with possible abuses? And, finally, how could the rules be enforced?

Could a global phenomenon be addressed with local (state or regional) solutions? The European Union is ready to regulate social media, and the Member States agree that such regulation is necessary. Some, including Hungary and Poland, take a stricter approach to the issue (see the concepts of a so-called Lex Facebook). In parallel with the EU, the regulation, accountability, and controllability of social media is also a matter of public and professional interest in the USA. Social media (and other online services) have so far operated in slightly different ways on the two continents due to the GDPR, which, for example, imposes significantly stricter rules on the processing of personal data than U.S. federal and state-level solutions. Fragmented legal solutions always raise dilemmas related to efficiency and powers (or the fear of them).

However, the question arises as to whether globally operating platforms can be judged and regulated at a regional level without compromising consumer rights. By the latter, I mean, for example, the consumer protection aspects of geoblocking, which also raise the theoretical question of categorizing citizens in terms of access to content.

Another question is whether the regulatory framework can be effective top-down or only bottom-up. There are many areas of self-regulation in social media, and soft-law solutions exist for other digital companies as well. Recent political events have highlighted that these are not necessarily effective: self-regulation merely smooths away the conflicts that might otherwise arise, without offering an effective solution. However, forcing regulation at the state level could be risky, and its efficiency could be questioned due to the cross-border nature of the platforms. Are there tools for regulation that could be borrowed from other areas in an inclusive manner?

Could tax sanctions work (e.g. a kind of ‘Tax Facebook’ instead of a Lex Facebook)? In my view, taxation by individual states could be risky without a common European tax scheme for digital companies. This is a rough road that has been impassable for decades because of competence issues between the EU and the Member States. However, EU tax rules on dotcoms would allow social media providers to be taxed by the states (or the EU) and sanctioned through tax instruments if they wish to participate in informing citizens about news of public interest. I raise the issue of sanctions along the lines of spreading dis- or misinformation… Financial instruments could force social media to pay tax on its income from the use of citizens’ data if the platforms want to become active information- and content-sharing sites (and not just sites for connecting people). This could be interpreted as poking Facebook and other social media instead of punching them with strict rules… However, this is difficult to achieve in a fragmented framework with different tax rules and rates in the Member States, which do not want to harmonise tax rules due to sovereignty claims. The common tax issue, even if only in the digital field, leads us to constitutional dilemmas. The division and transfer of competences/powers is a sensitive area of EU law. The Gordian knot could be cut with a cheap Chinese solution: geoblocking on social media. With that we could say: “Mischief Managed”!

The problem there is that both the European and American systems are designed to meet the needs of consumers (in trade and in politics, too). Modern politics wants not only to serve citizens in this regard but also to use social media interfaces for information or even campaigning…

All this requires the development of common minimum standards in the areas indicated above, both in substantive and procedural terms. The regulation and taxation of the companies are seemingly not the best possible solutions… Maybe new solutions could be borrowed from other fields, in an inclusive manner… Well, the clarification of concepts can be a good starting point for that change!

Lilla Nóra KISS is a counselor on EU legal affairs at the Ministry of Justice. Formerly, she was a researcher and lecturer for five years at the Institute of European and International Law of the University of Miskolc (Hungary), where she taught European Union law.

Lilla obtained her Ph.D. degree in 2019; her dissertation dealt with the legal issues of the withdrawal of a Member State from the EU.

Her current research interests cover the legal dimensions of Brexit, the interpretation of the European Way of Life, and the Digital Single Market.

Márton SULYOK: Size Does Matter (?!)

Some European Debates on the Use of Religious Symbols in the Workplace

On 25 February 2021, Advocate General Athanasios Rantos (CJEU) issued an opinion in two German PRPs (preliminary ruling procedures) on whether an employer’s internal ‘neutrality policy’ can at all prohibit the wearing of large religious symbols, while being more lenient regarding smaller, more modest ones. These neutrality rules obviously encompass all forms of clothing and ‘office wear’, not only specifically identifiable religious symbols, as such regulations normally extend to all manifestations of political, religious and ‘world’ views. (Please note: the opinion is not yet available in English – below, I will limit myself to the Hungarian text and its translations.)

Constitutional scholarship, free speech advocates and public opinion all approach this topic with care from many aspects, especially with regard to the wearing of such symbols in various public spaces, from universities and educational institutions to public administration offices. Thus, stances regarding “religious dress” (broadly speaking) divide Europe, and a multitude of national practices and constitutional and legal rules have become public knowledge due to the jurisprudence of the European Court of Human Rights (Court, ECtHR). One among these decisions has been the famous landmark of Leyla Sahin v. Turkey (2005), wherein the Court made the following statement (para. 35), evaluating the relationship of secularism and freedom of religion on the basis of identity. It argued that “[t]hose in favour of the headscarf see wearing it as a duty and/or a form of expression linked to religious identity.” In other words, there is a right to the respect of religious symbols that is inherent to freedom of religion on an identitarian basis. The fact that the Court alludes to a “form of expression” linked to this identity transfers the discourse into the realm of free speech, which further complicates how we might interpret rules that limit wearing religious dress or similar symbols. (With these in mind, Erica Howard, a legal expert of the European Commission, has looked at these issues in her 2017 study in a narrower European context, tailored to the EU, also looking at national practices.)

The two German cases (C-341/19 and C-804/18) providing the grounds for the AG opinion mentioned above now touch upon similar issues under German law, in light of EU law, regarding the wearing of an Islamic headscarf under the neutrality rules of two companies (a drugstore operator and an association in charge of maintaining kindergartens). The EU law in question, regarding which the preliminary questions of the two German labor courts were raised, is Directive 2000/78/EC establishing a general framework for equal treatment in employment and occupation.

According to the AG’s opinion, relevant restrictions in employers’ internal regulations do not constitute discrimination in this regard if they relate to any manifestation of employees’ political, religious or other world views. (This is based on previous cases such as Achbita or Bougnaoui.) However, this argument should be taken further in relation to the visible wearing of any symbols pertinent to the above – in the instant case, religious symbols. After visibility was dealt with in CJEU practice in the famous G4S case, the focus visibly shifted to their size and ‘conspicuousness’. The AG opinion held that in the instant cases the restrictions affected ‘office wear’ in terms of any signs of religious views visible to third persons, clearly referring to this rule as part of maintaining client relations (paras. 51-52). At this point, the opinion underlined that current CJEU jurisprudence does not directly entail, in cases similar to the one at hand, that discrimination could not be established regarding rules banning the wearing of Islamic headscarves (para. 56).

In the second part of the opinion, paras. 71-76 contain some key arguments that need to be emphasized. The AG argued that the CJEU has not yet decided on the issue of rules banning the wearing of large symbols of political, religious or other world views, which logically means that the following issue needs to be examined: whether small-sized symbols can in fact be worn in the workplace in a visible manner. The AG refers here to Eweida and Others, a case decided in 2013 by the ECtHR, where modesty was central to declaring a violation of Article 9 ECHR: the respondent UK was found to have violated the Convention for sanctioning the wearing of modest religious symbols that were otherwise unsuitable to tarnish the professional image of their wearers. Consequently, the following argument is made: employers’ neutrality policies – in the context of their client relations – are not inconsistent with their employees wearing small, modest religious symbols that are not detectable at first sight. Here it is argued that size does matter, as the AG is of the opinion that small symbols cannot offend those clients of the company who do not share the religion or views of the employees wearing them.

Coming back to visibility and the relevant ban, the AG states that if visible signs can be lawfully banned under G4S, then, based on the freedom to conduct a business (under the Charter), employers are free to explicitly and exclusively ban the wearing of big symbols. So size does matter (?), but the only real question is who is to say what is big or small. The AG is of the opinion that it shall not be the CJEU; such assessment is (duly) deferred to national courts based on the totality of circumstances, also accounting for the environment in which the symbols are worn. One thing is for sure, though: the size of the Islamic headscarf is not small.

What is also not small is the number of contradictions in the opinion, considers Martijn van den Brink in his latest post on Verfassungsblog. “The only positive aspect” of the opinion, he writes, deals with specifying the relationship of national constitutional and European rules protecting freedom of religion; in this context the opinion examines whether national constitutional rules protecting the freedom of religion can be interpreted as “provisions which are more favourable to the protection of the principle of equal treatment than those laid down in this Directive” in light of Article 8 of the Directive. The AG concludes that they cannot. In this context, it was also raised whether the internal rules of the employer in C-341/19 are superseded by the rules of the national constitution, which might have priority over neutrality policies based on the freedom to conduct a business, in reliance on the standards set by Sahin (where wearing the headscarf was considered by the employee as a religious duty).

Regarding C-804/18, the AG’s opinion sets forth that the German Federal Constitutional Court (GFCC) emphasized that the freedom to conduct a business under the Charter can no longer be assigned priority over freedom of religion in all cases, specifically where neutrality is imposed by the employer in client interactions but the lack of such neutrality would not lead to any economic disadvantage. Based thereon, the GFCC emphasized that situations created by employers’ intent to abide by client requests, resulting in services being rendered by an employee not wearing an Islamic headscarf, do not fall under the “genuine and determining characteristic” rule of Article 4 of the Directive. In the AG’s opinion, it is not contrary to the Directive if a national court applies the provisions of the national constitution to examine the internal regulations of a private company prohibiting the wearing of symbols referring to political, religious or other world views, but any occurrence of discrimination should be duly assessed by the national courts as well.

Regardless of what arguments the CJEU’s judgment will finally rely on (given that the AG’s opinion is non-binding on the Court), AG Rantos’ opinion surely adds another layer to the European debates on wearing or displaying religious symbols and the right to respect these symbols as “forms of expression” tied to one’s identity under the aforementioned Sahin judgment of the ECtHR.

If we shift the identity focus of the arguments from the individual to the state, France comes to mind. The mystery of laicity (laïcité) – often misunderstood abroad – is an inherent element of French constitutional identity, i.e. the constitutional principle of secularism, and defines many aspects of the operation of the State. Debates over the above questions started much earlier here than elsewhere in Europe. Legal debates in concrete cases have surfaced over crucifixes in classrooms, or modest religious symbols worn around the necks of parent chaperones on class outings, but the Islamic headscarf (foulard) – just as in the German cases above – and the full-body veil (voile intégral) have all been affected in court battles and ensuing legislation. Besides lawmakers, the Constitutional Council and the Council of State have both had their say – sometimes from different points of view – as early as 1989 in the infamous case of the “Foulards de Creil”, or then in 2010 regarding the constitutionality of a ban on wearing full-body veils in public spaces for public-safety reasons.

Most recently, the 2020 projet de loi (draft bill) on reinforcing the principles of the Republic contains several provisions regarding freedom of religion that expressly originate in the 1905 law that codified the separation of Church and State, thus introducing laicity into the French constitutional tradition – although in its original form it contained no limitations on the “port des signes”, the wearing of (religious) symbols. Prohibition only surfaced later on, in different but familiar contexts: at first, in education (cf. Sahin). The law on laicized education was born in 1882, and a 2004 law prohibited the wearing of all “ostensibly manifest” (i.e. clearly visible) signs, logically leading to a more lenient approach toward more discreet, modest (small-sized?) signs. In relation to the workplace and employment, the 2020 draft bill, for example, outlines that in the course of providing public services the principles of neutrality and laicity need to be observed. In terms of private companies, if their employees engage in client relations, restrictions similar to the ones mentioned in the German cases can be applied; moreover, if a private company provides a public service by law, it shall also observe the rules regarding neutrality and laicity, according to the draft bill. Obviously, the legislative debate on this draft bill is not over yet, and nothing is set in stone – the Senate shall decide on it sometime in May this year – but until then, scholars are left to work with this concept.

Based on the events of the past weeks, we can conclude that the size of the debate is expected to grow, and it does matter which previously resolved issues will gain new interpretations in light of new approaches. Based on the above, it is not at all evident whether the wearing of religious symbols is a one-way street – a matter of principle manifesting itself in forms of expression tied to one’s identity – or rather, based on the German cases described above, “size does matter” and in some contexts there is no “one-size-fits-all” solution.

Márton SULYOK, lawyer (PhD in law and political sciences), legal translator. As a graduate of the University of Szeged, he has been working for his alma mater in different academic and management positions since 2007. He is currently a Senior Lecturer in Constitutional Law and Human Rights at the Institute of Public Law of the Faculty of Law and Political Sciences (Szeged), and the head of the Public Law Center at MCC (Mathias Corvinus Collegium) in Budapest. He previously worked for the Ministry of Justice in Budapest and has been an Alternate Member on the Management Board of the Fundamental Rights Agency of the European Union (2015-2020). E-mail: