Mónika MERCZ: Your DNA is the key to cold cases

Do you listen to true crime podcasts? If so, you have probably heard about cold cases such as the Boy in the Box, the Lady of the Dunes and Opelika Jane Doe. The common feature of these cases? These victims have all been identified using genetic genealogy (DNA) – but the story is not as simple as you might think. Especially seeing that your curiosity about your ethnic background is what helps DNA testing companies create huge banks of DNA from your samples. Do you think they keep these samples to themselves? Do you think solving crimes through DNA testing has nothing to do with at-home DNA testing and companies such as Ancestry or 23andMe? You might be surprised if you dig a little deeper.

Your genetic material provides invaluable information to those handling it, and while solving these cold cases and catching perpetrators is seen as worth the price we pay, paying with unrestricted access to our DNA may not be as proportionate to the goal we want to reach as you might think.

While many companies try to develop further technologies to unlock our DNA, and DNA testing companies such as Ancestry, 23andMe and MyHeritage make it possible for us to solve decades-old disappearances, we forget that law enforcement, private companies and many more interested parties now know everything about us – without our consent. Our aim was just to help alleviate the pain of unbelievable tragedies, but now we have given up information about our being as a whole, in addition to our family tree, our diseases and our relatives. This is especially frightening, seeing that our society as a whole could become much more transparent – and possibly segregated – if we were to give out our genetic makeup. How could we stop this unrestricted access to our DNA from causing problems while keeping the best parts of DNA databases? I believe the answer lies in stronger regulations that protect sensitive data, especially in the US, where many of these private DNA testing companies are headquartered, and where data protection is given less attention than, for example, in the European Union.

The GDPR, the Regulation governing data protection across the European Union, is not named as applicable in the privacy policies of many DNA testing companies. We might think that there is additional protection in the US – but those laws are actually significantly weaker, making the infringement of this kind of sensitive personal data easier. It may get into the hands of insurance companies, who will know about your state of health, or law enforcement might use it to identify one of your loved ones as a crime suspect. While combating heinous atrocities is a noble goal, are we ready for anyone to see our ethnic background, our genetic diseases? Are we ready to live transparently, stripped bare in the name of justice? Focusing on the data protection aspects is more important now than ever. If we wish to have our cake and eat it too – provide justice for the dead and protect the living – we should consider what measures need to be taken. It is imperative that companies providing DNA testing, as well as law enforcement, be clearer about the timeframe and manner of handling genetic material. Open communication and strict regulations are necessary. We cannot and should not ask families to give up hope of finding out what happened to their loved ones, but we also cannot afford to become transparent and risk possibly horrible consequences. I advise every lover of true crime, every empathetic person who wishes to see crimes solved, and all of us who are curious about our genetic background, to take a step back and advocate for laws which protect us while giving law enforcement the opportunity to do their job – without any mishandling.

New, stricter laws and bodies should be set up as we progress towards a society where no misdeed is left uncovered, so that we, the innocent, may keep seeing crimes being discovered without our very makeup meeting the same fate.

Mónika Mercz, JD, specialized in English legal translation, is a Professional Coordinator at the Public Law Center of Mathias Corvinus Collegium Foundation while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Mónika’s past and present research focuses on constitutional identity in EU member states, data protection aspects of DNA testing, environmental protection, children’s rights and Artificial Intelligence.
Email: mercz.monika@mcc.hu

Vagelis PAPAKONSTANTINOU: The (New) Role of States in a ‘States-As-Platforms’ Approach

The forceful invasion of “online platforms” not only into our everyday lives but also into the EU legislator’s agenda, most visibly through the DSA and DMA regulatory initiatives, has perhaps opened up another approach to state theory: what if states could also be viewed as platforms themselves? Within the current digital environment, online platforms are information structures that hold the role of information intermediaries, or even “gatekeepers”, among their users. What if a similar approach, that of an informational structure, were applied to states as well? How would that affect their role under traditional state theory?

The ‘States-as-Platforms’ Approach

Under the current EU law approach, online platforms essentially “store and disseminate to the public information” (DSA, article 2). This broadly corresponds to the digital environment around us, accurately describing a service familiar to us all whereby an intermediary offers to the public an informational infrastructure (a “platform”) that stores data uploaded by a user and then, at the request of that same user, makes such data available to a wider audience, be it a closed circle of recipients or the whole wide world. In essence, the online platform is the necessary medium to make this transaction possible.

Where do states fit in? Basically, states have held the role of information intermediaries for their citizens or subjects since the day any type of organised society emerged. Immediately at birth humans are vested with state-provided information: a name, as well as a specific nationality. Without these a person cannot exist. A nameless or stateless person is unthinkable in human societies. This information is subsequently further enriched within modern, bureaucratic states: education and employment, family status, property rights, taxation and social security are all information (co-)created by states and their citizens or subjects.

It is with regard to this information that the most important role of states as information brokers comes into play: states safely store and further disseminate it. This function is of paramount importance to individuals. To live their lives in any meaningful manner, individuals need to have their basic personal data, first, safely stored for the rest of their lives and, second, transmittable in a validated format by their respective states. In essence, this is the most important and fundamental role of states, taking precedence even over the provision of security. At the end of the day, the provision of security is meaningless unless the state’s function as an information intermediary has been provided and remains in effect—that is, unless the state knows who to protect.

What Do Individuals Want?

If states are information brokers for their citizens or subjects, what is the role of individuals? Are they simply passive actors, co-creating information within boundaries set by their respective states? Or do they assume a more active role? In essence, what does any individual really want?

Individuals want to maximise their information processing. This wish is shared by all, throughout human history. From the time our ancestors drew on cave walls and improved their food-gathering skills to the Greco-Roman age, the Renaissance and the Industrial Revolution, humans have basically always tried, and succeeded, to increase their processing of information, to maximise their informational footprint. Or, in Van Doren’s words, “the history of mankind is the history of the progress and development of human knowledge. Universal history […] is no other than an account of how mankind’s knowledge has grown and changed over the ages”.

At a personal level, if it is knowledge that one is after then information processing is the way of life that that person has chosen. Even a quiet life, however, would be unattainable if new information did not compensate for inevitable change around us. And, for those after wealth, what are riches other than access to more information? In essence, all of human life and human experience can be viewed as the sum of the information around us.

Similarly, man’s wish to maximise his information processing includes the need for security. Unless humans are and feel secure, their information processing cannot be maximised. On the other hand, this is as far as the connection between this basic quest and human rights or politics goes: an increase in information processing may assumedly be favoured in free and democratic states, but this is not necessarily so. Human history is therefore a long march not towards democracy, freedom, human rights or any other (worthy) purpose, but simply towards information maximisation.

The Traditional Role of States Being Eroded by Online Platforms

Under traditional state theory, states exist first and foremost for the provision of security to their citizens or subjects. As most famously formulated in Hobbes’ Leviathan, outside a sovereign state man’s life would be “nasty, brutish, and short” (Leviathan, XIII, 9). It is to avoid this that individuals, essentially under a social contract theory, decide to forego some of their freedoms and organise themselves into states. The polities that these states form from that point on can take any direction, ranging from democracy to monarchy or oligarchy.

What is revealing for the purposes of this analysis, however, is the frontispiece of Hobbes’ book: in it, a giant crowned figure is seen emerging from the landscape, clutching a sword and a crosier, beneath a quote from the Book of Job (Non est potestas Super Terram quae Comparetur ei / There is no power on earth to be compared to him). The torso and arms of the giant are composed of over three hundred persons, all facing away from the viewer (see the relevant Wikipedia text).

The giant is obviously the state, composed of its citizens or subjects. It provides security to them (this is, after all, Hobbes’ main argument and the book’s raison d’être) – but how is it able to do that? Tellingly, by staying above the landscape, by seeing (and knowing) all, by exercising total control over it.

Throughout human history, information processing was state-exclusive. As seen, the only thing individuals basically want is to increase their processing of information. Nevertheless, from the ancient Iron Age empires to the Greek city-states, the Roman empire or the medieval empires in the West and the East, this was done almost exclusively within states’ (or empires’) borders. With a small exception (small circles of merchants, soldiers or priests who travelled around), any and all data processing by individuals was performed locally within their respective states: individuals created families, studied, worked and transacted within closed, physical borders. There was no way to transact cross-border without state intervention, and thus control, either in the form of physical border-crossing and relevant paperwork or import/export taxes or, even worse, mandatory state permits to even leave town. This was as true in our far past as it was until as recently as the early 1990s, when the internet emerged.

States were therefore able to provide security to their subjects or citizens because they controlled their information flows. They knew everything, from business transactions to personal relationships. They basically controlled the flow of money and people through control of the relevant information. They could impose internal order by using this information and could protect from external enemies by being able to mobilise resources (people and material) upon which they had total and complete control. Within a states-as-platforms context, they co-created the information with their citizens or subjects, but they retained total control over this information to themselves.

As explained at a recent MCC conference last November, online platforms have eroded the above model by removing exclusive control of information from the states’ reach. By now individuals transact over platforms, by-passing the mandatory state controls (borders, customs etc.) of the past. They study online and acquire certificates from organisations that are not necessarily nationally accredited or supervised. They create cross-national communities and exchange information or carry out common projects without any state involvement. They have direct access to information generated outside their countries’ borders, completely uncontrolled by their governments. States, as information brokers profiting from exclusivity in this role, now face competition from platforms.

This fundamentally affects the Leviathan frontispiece above. The artist chose to have all of the persons composing the giant show no face to the viewer – to face the state. This has changed with the emergence of online platforms: individuals now carry faces and are looking outwards, to the whole wide world that has suddenly been opened up to each one of us, in an unprecedented twist in human history.

The New Role of States

If the generally accepted basic role of states as providers of security is being eroded by online platforms, what can their role be in the future? The answer lies perhaps within the context of their role as information intermediaries (a.k.a. platforms), taking also into account that what individuals really want is to maximise their information processing: states need to facilitate such information processing.

Enabling maximised information processing carries wide and varied consequences for modern states. Free citizens who are and feel secure within a rule-of-law environment are in a better position to increase their informational footprint. Informed and educated individuals are able to process information better than uneducated ones. Transparent and open institutions facilitate information processing, whereas decision-making behind closed doors stands in its way. Similarly, information needs to be free, or at least accessible under fair conditions to everybody. It also needs to remain secure, inaccessible to anybody without a legitimate interest in it. Informational self-determination is a by-product of informational maximisation. The list can go on almost indefinitely, assuming an informational approach to human life per se.

The above do not affect, at least directly, the primary role of states as security providers. Evidently, this task will (and needs to) remain a state monopoly. The same is true of other state monopolies, such as market regulation. However, under a states-as-platforms lens, new policy options open up while older assumptions may need to be revisited. At the end of the day, from a “pursuit of happiness” point of view, if happiness ultimately equals increased information processing, then states need to, if not facilitate, then at least allow such processing to take place.

Vagelis Papakonstantinou is a professor at Vrije Universiteit Brussel (VUB) at LSTS (Law Science Technology and Society). His research focuses on personal data protection, both from an EU and an international perspective, with an emphasis on supervision, in particular Data Protection Authorities’ global cooperation. His other research topics include cybersecurity, digital personhood and software. He is also a registered attorney with the Athens and Brussels Bar Associations. Since 2016 he has been serving as a member (alternate) of the Hellenic Data Protection Authority, while previously served as a member of the Board of Directors of the Hellenic Copyright Organisation (2013-2016).

Mónika MERCZ: Privacy and Combatting Online Child Sexual Abuse – A Collision Course?

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a Joint Opinion on the Proposal for a Regulation to prevent and combat child sexual abuse on the 29th of July, 2022. While this has not made huge waves in public discourse, we must take a moment to discuss what this stance means for how we view data protection in relation to child protection, and specifically the fight against online child sexual abuse material (CSAM). International Data Protection Day seems like a good occasion to contribute to this debate.

The Proposal’s aim was to impose obligations when it comes to detecting, reporting, removing and blocking known and new online CSAM. According to the Proposal, the EU Centre and Europol would work closely together, in order to transmit information regarding these types of crime. The EDPB and EDPS recommend that instead of giving direct access to data for the purposes of law enforcement, each case should be first assessed individually by entities in charge of applying safeguards intended to ensure that the data is processed lawfully. In order to mitigate the risk of data breaches, private operators and administrative or judicial authorities should decide if the processing is allowed.

While child sexual exploitation must be stopped, the EDPB stated that limitations to the rights to private life and data protection must be upheld; thus only strictly necessary and proportionate information should be retained in these cases. The conditions for issuing a detection order for CSAM and child solicitation lack clarity and precision. This could unfortunately lead to generalised and indiscriminate scanning of the content of virtually all types of electronic communications. But is the safety of our privacy truly worth the pain suffered by minors? Is it not already too late for our society to try to put privacy concerns first anyway? I believe that this issue is much more multifaceted than it would seem at first glance.

There are additional concerns regarding the use of artificial intelligence to scan users’ communications, which could lead to erroneous conclusions. While human beings make mistakes too, the fact that AI is not properly regulated is a big issue. This fault in the system may potentially lead to several false accusations. EDPB and EDPS shared that in their opinion “encryption contributes in a fundamental way to the respect of private life and to the confidentiality of communications, freedom of expression, innovation and growth of the digital economy.” However, it must be noted that more than one million reports of CSAM happened in the European Union in 2020. The COVID-19 pandemic was undoubtedly a factor in the 64% rise in such reports in 2021 compared to the previous year. This is cause for concern, and should be addressed properly.

In light of these opposing views about the importance of individuals’ rights, I aim to find some semblance of balance. The real question is: how can we ensure that every child is protected from sexual exploitation, perpetrators are found and content is removed, while protecting ourselves from becoming completely transparent and vulnerable?

  1. Why should we fight against the online sexual exploitation of children?

First of all, I would like to point out how utterly vital it is to protect children from any form of physical, psychological or sexual abuse. Protecting children is not only a moral issue, but also the key to humanity’s future. I would like to provide some facts, underlined by mental health experts. We know that any form of child sexual exploitation has short-term effects including regressive behavior, performance problems at school, and an unwillingness to participate in activities. Long-term effects include depression, anxiety or anxiety-related behavior, eating disorders, obesity, repression, and sexual and relationship problems. These serious issues can affect people well into adulthood, culminating in a lower quality of life and leaving members of society less productive.

In addition to these serious psychological consequences, the fundamental rights of victims are infringed, such as the human rights to life, health, personal freedom and security, as well as their right not to be tortured or exposed to other inhuman, cruel or degrading treatment, as guaranteed by the UDHR and other international laws. In addition to the efforts made by countries that ratified the Convention on the Rights of the Child, I must also mention the United States Supreme Court decision and lower court decisions in United States v. Lanier. In this case we can see that, in the US interpretation, sexual abuse violates a recognized right of bodily integrity as encompassed by the liberty interest protected by the 14th Amendment. Although this American finding dates back to 1997, that does not strip the statement of its validity in our online world.

To speak about the legal framework governing the issue in my home country, Hungary, the Fundamental Law also protects the aforementioned rights. Article XV, under “Freedom and Responsibility”, states that “(5) By means of separate measures, Hungary shall protect families, children, women, the elderly and those living with disabilities.” While this is an excellent level of protection, I would propose adding the phrase “Hungary shall take measures to protect children from all forms of sexual exploitation” – or, even if we do not add it to our constitution, we must make it a priority. Act XXXI of 1997 on the Protection of Children and the Administration of Guardianship is simply not enough to help keep children safe against new forms of sexual abuse, in particular online exploitation. With the dark web providing a place for abusers to hide, what options do we have to expose these predators and recover missing children?

A study explored a sample of 1,546 anonymous individuals who voluntarily responded to a survey when searching for child sexual abuse material on the dark web. 42% of the respondents said that they had sought direct contact with children through online platforms after viewing CSAM. 58% reported feeling afraid that viewing CSAM might lead to sexual acts with a child or adult. So we can see that the situation is indeed dire and needs a firm response on an EU level, or possibly even on a wider international level. Sadly, cooperation between countries with different legal systems is incredibly difficult, time-consuming and could also lead to violations of privacy as well as false accusations and unlawful arrests. This is where several of the concerns of EDPB and EDPS arose in addition to the data protection aspects mentioned before.

  2. Avoiding a Surveillance State

Having talked about the effects and frequency of child sexual abuse online, I have no doubt that the readers of this blog agree that drastic steps are needed to protect our most vulnerable. However, the issue is made difficult by the fear that data provided by honest people wishing to help catch predators could lead to data protection essentially losing its meaning. There are many dire consequences that could penetrate our lives if data protection were to metaphorically “fall”. It is enough to think of China’s Social Credit System and surveillance state, which is a prime example of what can happen if the members of society become transparent instead of the state. Uncontrolled access to anyone’s and everyone’s data under the guise of investigation into cases of online abuse could easily lead to surveillance capitalism getting stronger, our data becoming completely visible and privacy essentially ceasing to exist.

Right now, personal data is protected by several laws, namely the GDPR, and in Hungary, Act CXII of 2011 on the Right of Informational Self-Determination and on Freedom of Information. This law is upheld in particular through the work of the Hungarian National Authority for Data Protection and Freedom of Information. The Fundamental Law of Hungary also upholds the vital nature of data protection in its Article VI (3)[1] and (4)[2]. I advise our readers to take a look at the relevant legal framework themselves, but I shall focus on the pertinent data-protection aspects for the sake of this discussion.

There are several declarations by politicians and institutions alike that reinforce how essential this field of law is. This is of course especially true in the case of the European Union. As has been previously stated in one of our posts here on Constitutional Discourse, by Bianka Maksó and Lilla Nóra Kiss, the USA has a quite different approach. But can we justify letting children go through horrific trauma in order to protect our personal information? Which one takes precedence?

  3. A moral issue?

On the most basic level, we might believe that our information cannot be truly protected, so we might as well take a risk and let our data be scanned, if this is the price we must pay in order to protect others. But are we truly protecting anyone, if we are making every person on Earth more vulnerable to attacks in the process?

The Constitutional Court of Hungary has long employed the test of necessity and proportionality to determine which of two fundamental rights needs to be restricted when they collide. I shall defer to their wisdom and aim to replicate their thought process in an incredibly simplified version – as is made necessary by the obvious limitations of a blog post. My wish is to examine whether we could justify an infringement of data protection in the face of a child’s right to life and development.

First of all, I shall examine whether the restriction of our right to data protection is absolutely necessary. If the right of children not to suffer sexual exploitation online (which, again, contains facets of their right to life, health, personal freedom and security, as well as their right not to be tortured or exposed to other inhuman, cruel or degrading treatment) can be upheld in any other – less intrusive but still proper – way than giving up data protection, then restricting privacy is not necessary. While experts’ opinions lean towards the view that privacy must be upheld, I would like to respectfully try to see it from another side. Currently we are trying to implement measures to stop online child abuse in all its forms, but this yields few results. The issue is growing. Many claim that a form of cooperation between law enforcement, hackers, different countries and many other actors could curb this crime further. Could we ever completely stop it? Probably not. But could we uphold children’s right not to be tortured or exposed to other inhuman, cruel or degrading treatment, and their right to a healthy development? Maybe.

I put forward the idea that at this point we have no other, more effective measure to stop online child sexual abuse other than restricting our own protection of personal data to a degree – child protection is a public interest and protecting our posterity also has constitutional value. Additionally, the Fundamental Law of Hungary contains in its Article I (3) that “(…) A fundamental right may only be restricted to allow the effective use of another fundamental right or to protect a constitutional value, to the extent absolutely necessary, proportionate to the objective pursued and with full respect for the essential content of that fundamental right.” As I have argued, child protection is undoubtedly of constitutional value and could warrant the restriction of data protection. On the other hand, the Constitutional Court of Hungary has established that privacy protection is also of constitutional value.[3]

As the second step of the test, based on my previous observations, I must wholeheartedly agree that data protection should only be restricted to the most indispensable extent. Because these two issues are so intertwined and difficult to balance, we could have a new policy specifically for cases where CSAM is sought by looking into personal data. I firmly believe that such a solution could be found, but it would require establishing new agencies that specifically deal with the data protection aspects of cases like this. The prevalence of this material on the Internet also makes it necessary to update the laws governing the relationship between privacy and recordings of CSAM.

I cannot think of a better alternative right now than a slight restriction of privacy, even with the added risks. The way things are progressing, the added weight of the global pandemic, inflation, war and climate change will lead to more children being sold and used for gain on online platforms, which are often untraceable. Are we willing to leave them to their fate in the name of protecting society as a whole from possibly becoming more totalitarian? Are we on our way to losing privacy anyway?

These are all questions for future generations of thinkers, who may yet develop newer technologies and safer practices that make balancing these two sides of human rights possible. Until then, I kindly advise everyone reading this article to think through the possible consequences of taking action in either direction. Hopefully, on International Data Protection Day, I have gauged your interest in a discussion which could lead to concrete answers and new policies across the EU in the future.

Mónika MERCZ, lawyer, is a PhD student in the Doctoral School of Law and Political Sciences at Károli Gáspár University of the Reformed Church, Budapest. A graduate of the University of Miskolc and former Secretary General of ELSA Miskolc, she currently works as a Professional Coordinator at the Public Law Center of Mathias Corvinus Collegium. She is a Member of the Editorial Board of the Constitutional Discourse blog.

E-mail: monika@condiscourse.com

[1] (3) Everyone shall have the right to the protection of his or her personal data, as well as to access and disseminate data of public interest.

[2] (4) The application of the right to the protection of personal data and to access data of public interest shall be supervised by an independent authority established by a cardinal Act.

[3] Hungarian Constitutional Court Decision 15/2022. (VII. 14.) [24]

James C. COOPER- John M. YUN: Competing For or Against Privacy? On Using Competition Law to Address Privacy Issues

In our digital economy, privacy has taken center stage. Given that spotlight, we have already seen regulatory intervention into markets with the EU’s GDPR and DMA. More generally, the GDPR and DMA are part of a larger body of regulation that the EU has passed, or is contemplating passing, to address large platforms (see Márton Sulyok’s “How to Tackle IT?” published on this blog). While the verdict is still out, the early empirical evidence strongly suggests that whatever its privacy benefits, the GDPR has had negative economic consequences.

Because the large tech platforms that tend to be in the bullseye of regulators on both sides of the Atlantic give their products away and live off consumer information, a conventional wisdom that has arisen is that market power becomes manifest through degraded privacy protections. In other words, the assertion is that, when platforms have more market power, they lower their privacy quality. Yet, in a recent article, Antitrust & Privacy: It’s Complicated, our empirical results challenge this conventional wisdom. In this blog post, we contribute to the debate surrounding personal data protection that has already been started by Bianka Maksó and Lilla Kiss also on this blog.

Privacy and antitrust have been on a collision course for some time now. For instance, in the U.S., an executive order from the president denounced dominant online platforms for using their market power “to gather intimate personal information that they can exploit for their own advantage.” The chair of the U.S. Federal Trade Commission (FTC) has expressed concern that “[m]onopoly power […] can enable firms to degrade privacy without ramifications.” In the EU, the story is the same. For example, the German Bundeskartellamt brought a case against Facebook based on the theory that violating consumers’ privacy rights under the GDPR gave Facebook a data advantage that helped cement its dominant position.

On a superficial level, the negative relationship between privacy quality and market power sounds “right”—after all, we often hear that “if the product is free, then the product is me.” This leads to the following testable hypothesis: if data is the price that we pay for using these free platforms, market power will become manifest through lower levels of privacy.

In our paper, we address this hypothesis both theoretically and empirically. On a theoretical level, equating privacy and price is problematic for several reasons. First, while privacy is a “normal good,” in that, all else equal, consumers prefer more privacy to less, how consumers value privacy relative to other products is uncertain. Specifically, the “Privacy Paradox” suggests that, although consumers profess to care deeply about their privacy in surveys, their revealed behavior suggests otherwise. The root cause of this paradox is the subject of considerable debate. But whether rational choice, asymmetric information, or cognitive biases are to blame is beside the point—if privacy does not drive consumers’ marketplace choices, then privacy is not a relevant dimension of competition. 

Second, unlike price, user data is an input into a larger production process to produce some type of output. That is, unlike a monopolist who enjoys increased profits immediately when they exercise market power by reducing the quality of their product (and, hence, the monopolist’s costs), a firm can profit from increased levels of data collection only by taking an action to monetize them. And this monetization process provides benefits, typically through customization of advertisements or services (e.g., recommendation engines in streaming services or bespoke workouts in fitness apps). Thus, the relationship between the collection of user data and consumer welfare is not necessarily negative—again, unlike in the case for price. Finally, contrary to popular opinion, there is no general economic result that establishes a relationship between greater competition and product quality. To the extent that we view privacy as a dimension of quality, the result carries through—there is no a priori reason to assume that competition is more likely to result in better privacy protection than monopoly.

The relationship between competition intensity and privacy quality is further complicated for multisided platforms, which cater to both users and advertisers/sellers. Put simply, while users may value more privacy, advertisers/sellers value less user privacy. A platform balances the competing incentives of these two groups. Moreover, if we consider that users themselves may benefit from more personalized content generated from user data, then the story is even more complicated.

Theory can only take you so far, however. What is happening in the real world? Namely, what is the empirical relationship between market power and privacy quality? Surprisingly, little work has been done to answer this question. We attempt to fill that void with our study. We examined the relationship between various measures of market concentration—an imperfect proxy for market power, but one used by competition authorities throughout the world—and privacy levels for mobile apps on the Google Android platform and popular websites.

For mobile apps, we measure privacy quality using PrivacyGrade.org, which is a third-party assessment of app quality from a group of researchers at Carnegie Mellon University. Our results suggest no relationship exists between privacy grades and our proxies for market power, that is, market shares based on Google Play Store categories, and market concentration, i.e., the Herfindahl-Hirschman Index (HHI). We also find a robust, negative relationship between privacy and app quality ratings, consistent with a tradeoff between privacy and other dimensions of product quality that consumers value.
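The HHI used here as a concentration proxy is simple to compute: square each firm’s percentage market share and sum the squares, yielding an index from near 0 (atomistic competition) to 10,000 (pure monopoly). A minimal sketch, where the function name and the percent convention are our own illustration rather than anything taken from the study:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    `shares` are market shares expressed in percent, so the index
    ranges from near 0 (many tiny firms) to 10,000 (one monopolist).
    """
    total = sum(shares)
    # Sanity check: shares should describe the whole market.
    if not 99.0 <= total <= 101.0:
        raise ValueError("shares should sum to roughly 100 percent")
    return sum(s ** 2 for s in shares)

# Four equal firms: 4 * 25^2 = 2,500, which the US agencies' 2010
# merger guidelines already treat as a "highly concentrated" market.
print(hhi([25, 25, 25, 25]))  # 2500
print(hhi([100]))             # 10000 (monopoly)
```

The squaring step is what makes the index sensitive to asymmetry: a 90/10 split scores 8,200, far above the 5,000 of a 50/50 split, even though both markets have two firms.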

For websites, we measure privacy quality using DuckDuckGo’s privacy ratings for websites in thirty-seven website categories (e.g., Search, Health, News). These website categories are from SimilarWeb. While they do not necessarily correspond to the “relevant product markets” used in competition law, they represent independently created groupings of sites based on content tags and website self-identification. Again, the results suggest no relationship between privacy ratings and market concentration measures.

Combined, our empirical results cast serious doubt on the validity of the conventional wisdom that firms exercise market power by reducing privacy, and also suggest that app developers use consumer data to enhance the quality of their products. What this means for competition policy is that antitrust law appears to be a poor vehicle to address perceived privacy problems. To the extent that the marketplace is failing to produce optimal levels of privacy, we suggest that consumer protection aimed at increasing consumer access to information and the firms’ ability to credibly commit to higher privacy quality promises is likely to be the better policy tool.

The presumption that privacy and market power are linked is supported by neither theory nor empirics, which suggests that bringing high-profile antitrust cases against large platforms is unlikely to result in higher levels of privacy protection. The relationship between privacy and market power is complicated, and as such, the debate surrounding competition law and privacy could benefit from an injection of both nuanced theoretical considerations and more empirical evidence.

James C. Cooper is Professor of Law and Director, Program on Economics & Privacy, Antonin Scalia Law School, George Mason University; previously served as Deputy Director of Economic Analysis in the Bureau of Consumer Protection, U.S. Federal Trade Commission.

John M. Yun is Associate Professor of Law and Deputy Executive Director, Global Antitrust Institute, Antonin Scalia Law School, George Mason University; previously served as an Acting Deputy Assistant Director in the Bureau of Economics, Antitrust Division, U.S. Federal Trade Commission.

Lilla Nóra KISS: Professional Ethics and Morality Can Prevent Social Media From Becoming Sovereign

When top Russian diplomat Maria Zakharova explains that George Orwell’s dystopian classic Nineteen Eighty-Four was written to describe the dangers of Western liberalism and not totalitarianism, we may feel as though we are watching an absurd Monty Python satire. In those parodies, artists question facts and push conversations to extremes in order to criticize an existing system and discourage the audience from becoming participants in the absurd comedy. While such plays used to serve up absurdity purely for entertainment, they are gradually starting to normalize the reality that has grown out of absurdity, which is decidedly less enjoyable.

To substantiate this claim, our current reality can be broken down into multiple components of narration: the producers of a play can be viewed as analogous to the owners of media outlets and social media platforms, the directors to the censors, the main actors to the influencers—journalists, politicians, policymakers, and other public figures—and the members of the audience to the users of said media outlets or citizens.

As reality slowly becomes absurd, members of the passively consuming audience become active participants in the play. Ownership, naturally, makes profit-oriented decisions; the aim is to maximize the audience and thus the profit. The more extreme and negative the content, the more people it reaches. The competition to become the most popular outlet slowly pushes the focus from professional, objective, and ethical information-sharing towards sensationalist content (also known as ‘clickbait’), as human beings operate under the ‘bounded rationality’ identified by Herbert A. Simon in 1947. Consequently, this competition promotes partially irrational decision-making. Humans must make decisions based on the information available, but due to their cognitive and time limitations, people are vulnerable to the sources of that information. Today, social media serves as a general source of news for Americans. According to the Pew Research Center’s survey conducted in January 2021, Facebook stands out as a regular source of news for its users (54%), while a large portion of Twitter users regularly get news on the site (59%). Since resources and capacities are limited, platforms have a green light to filter and pre-digest the news for their users. The cherry-picked news comes from well-selected sources and is delivered directly to users’ newsfeeds. The filtered information behaves as a sub-threshold stimulus that unconsciously shapes users’ interpretations of certain topics. Combined with content whose framing lacks objectivity, users become easy targets of polarization. As a result, the demarcation lines between those who agree with a certain opinion and those who disagree become more acute.

In today’s age of surveillance capitalism – as Shoshana Zuboff named the “bloodless battle for power and profit as violent as any the world has seen” – the limits of the professional ethics of journalists, politicians, and other influencers are a key question. Another significant point is the owners’ liability for intentionally (trans)forming public opinion. As Count István Széchenyi – often referred to as the Greatest Hungarian – famously expressed, “ownership and knowledge come with responsibilities.” In a world where all information is available on the internet and owners of digital platforms are free to decide what to show to or hide from the masses, ownership over information becomes the most powerful means of shaping the future of society. There is no doubt that owners structure societies; the question is whether they do it guided by moral considerations or purely for their own financial benefit.

The former approach would be the idealistic scenario: it necessitates a social media environment where platforms’ owners do not intend to form public opinion and therefore (1) allow all forms of speech as free speech, even speech prone to expressing extremism, (2) let users pick and choose freely from millions of pieces of pre-generated information according to their consciously and explicitly preselected priorities (which obviously pushes the boundaries of limited human capacities and timeframes), and (3) neither tolerate nor apply any cancel culture. This also inherently implies that (4) even personae non gratae – Latin for “people not welcome” – would be allowed to use these platforms, even if their views are contrary to those of the ownership and, as such, considered undesirable on their platforms. This would also entail a lack of double standards and a state of objective fairness. At the same time, such an ideal form of social media management would not automatically excuse crossing certain thresholds, such as sharing hate speech, child pornography, or any other criminal content; the platforms would still be legally obligated to take the measures necessary to enable established public institutions to intervene and restore the balance. By the end of this description of the ideal social media platform, there should be no doubt that this utopian scenario does not currently exist.

In the latter case, however, without being overly pessimistic, the world becomes a worse place every day. In this sad but more realistic scenario, private entities are interested in playing with information and exploiting readers’ limited rationality. As a result, owners can intentionally form public opinion as a side effect of their profit maximization. Of course, the profit-oriented approach is a legitimate interest of corporations – there is nothing wrong with that, as long as profit maximization happens in compliance with ethical and moral standards. However, this leads to a very interesting legal dilemma. On the one hand, corporate decisions to allow or restrict content are legitimate under Section 230 of the CDA, the US law setting the standards for ‘decent communications’ since 1996. On the other hand, those decisions may lead to illegitimate consequences, because corporations have no legitimate authority to act as sovereigns and to form, deform, or transform public opinion by using their power over information. Attaining wholly unmanipulated public opinion is, of course, unimaginable and unnecessary in general, but identifying the influencer is crucial. Yet tracing the influencer is almost impossible in the virtual sphere – hence the question of these platforms’ accountability.

Translating the situation into the language of legal theory, the debate is about the relationship between law and morals. Natural law theory holds that law should reflect moral reasoning and should be based on moral order, whereas legal positivism holds that there is no necessary connection between law and moral order. A symbolic example highlights the practical difference: Nazi Germany and the Stalinist Soviet Union – two infamous totalitarian regimes of the 20th century – were rule of law regimes under a purely legal positivist interpretation. Under natural law, however, these states were not operating under the rule of law, and their laws were not valid due to the immorality of their content. This is because natural law requires morality to validate legal content, while legal positivism does not.

Reflecting the opposing views on the current issue, the legal positivist would ask, “what does the law say?” and would provide the ‘easy answer’: private entities, including the owners of social media platforms, are legally entitled to make discretionary decisions regarding the content they share or ban on their own platforms, regardless of the influence they exert over their users. Natural law, however, would require adding the moral values of ‘good’ or ‘bad’, ‘right’ or ‘wrong’ to make an adequate evaluation and give a ‘legal’ or an ‘illegal’ answer. Does these private entities’ exercise of their freedom to influence their users through the content they share lead to legal or illegal outcomes? If an act is technically legal, does it constitute the use or the misuse of corporate freedom under Section 230 CDA? In other words, does Section 230 license these corporations to shape public opinion? Is there any moral standard that the ownership should follow when making their private decisions based on Section 230, especially knowing that those decisions may influence and manipulate users? Of course, it is difficult to measure morality, as its standards are highly relative, and it is even more complicated to evaluate morality in the digital sphere. Even so, basic minimum moral standards would support both the ownership in making fair decisions and the creation of the most objective environment possible for the news cycle. Introducing content-neutral and impartial minimum standards based upon morality might therefore help shift the emphasis back from a partisan path to normalcy.

When the producers (owners) introduce moral principles to reach fairness, directors (censors) are free to manage their tasks within the frames of their professional ethics. The main players (the influencers: journalists, politicians, policymakers, and other public figures) are forced to serve the public interest instead of their own interests. The ownership has a huge responsibility for doing good within and for the society. Otherwise, the play becomes an absurd reality produced by quasi omnipotent owners, directed by unethical censors, and influenced by self-interested public figures.

Morality and law together can prevent social media ownership from becoming uncontested, illegitimate sovereigns. Preserving checks and balances to maintain a healthy equilibrium between private and public is important, and we can see what happens when the public-private balance is distorted. In communist dictatorships, private entities are weak compared to state actors and have extremely narrow room for maneuver in advocating their interests. In the People’s Republic of China and the Russian Federation, privately-owned media is virtually non-existent: the balance is distorted, and private actors depend on public institutions. Dependency, in turn, leads towards toleration of oppression and ideology-based manipulation. As a result, absurd things ensue: the people of Russia may not know that their country has been invading Ukraine for roughly three months now. In China, Western “traditional” social media is geo-blocked, and the Chinese have their own platforms. Strong totalitarian states do not tolerate any private intervention in their decision-making. On the other hand, it is also a mistake when private actors overstep their competences and influence public opinion to serve their private interests.

There is evidence that most people prefer normalcy over extremes. That is good news. Normalcy requires people in the middle to keep a healthy balance between private interest and public interest. Professional ethics, morality, and ownership liability can prevent private entities from becoming the new sovereigns that direct absurd plays about our digital reality.

Lilla Nóra KISS is a postdoctoral visiting scholar at Antonin Scalia Law School, George Mason University, Virginia. Lilla participates in the Hungary Foundation’s Liberty Bridge Program and conducts research on social media regulation and regulatory approaches. Formerly, Lilla was a senior counselor on EU legal affairs at the Ministry of Justice, and for five years she was a researcher and lecturer at the University of Miskolc (Hungary), Institute of European and International Law, where she taught European Union law. Lilla obtained her Ph.D. degree in 2019; her dissertation addressed the legal issues of the withdrawal of a Member State from the EU.

Her current research interests cover the legal dimensions of Brexit, the interpretation of the European Way of Life, and the perspectives towards social media regulation in the USA and in Europe.

Bianka MAKSÓ – Lilla Nóra KISS: Do not bury the lede: Your data, their money

Regardless of whether we consider personal data as rights or as things, everyone will agree that it is worth a lot. Personal data is a commodity, a kind of asset everyone has, but not everybody understands its potential. Do you?

Since Warren and Brandeis defined the “right to be let alone” in 1890, the world has changed a lot. But, even if their concept was among the first milestones of privacy, we still face the same challenges, just in a bigger, higher, stronger, and faster manner.

Personal data is the fuel of the network society created by the digital economy, and the engine of this machine is social media. In our understanding, social media is made up of online platforms fed by the personal data of the masses: any medium where the content (including but not limited to images, videos, messages, and sound files) is broadcast to, or capable of being broadcast to, the general public. Each element of the content is made by the users, while the framework, and ultimately the free flow of that personal content, is provided by the social media provider. Users and the platform are thus co-workers in creating social media; it is a jointly produced medium.

The origin of social media is somewhat unclear. The Swedish social networking website LunarStorm (originally Stajlplejs) was launched in 1996 and described as “the world’s first social media on the Internet” by its founder, Rickard Eriksson; it had 1.2 million members. According to the History Cooperative’s article titled “The Complete History of Social Media: A Timeline of the Invention of Online Networking”, social media goes back to 1997 and the launch of the first social media platform, Six Degrees, which lasted until 2001 and had 3.5 million members. Although iWiW (International Who is Who) – a Hungarian social networking web service launched on 14 April 2002 – is not cited often in social media resources, we propose incorporating it into the current discourse. iWiW was unique in that it worked on an invitation basis, which gave its users a seemingly exclusive personal guarantee. The system was renewed in 2005 and became multi-lingual, with other new features. At the peak of its success, iWiW had 4.7 million members and about 1.5 million daily users in a country of ten million (Hungary). By the end of 2010, roughly equal numbers of Hungarians were logging into iWiW and Facebook every day, but Facebook’s membership then pulled ahead because it was better financed. The history of iWiW is worth mentioning from a personal data point of view. On 28 April 2006, T-Online, the internet service branch of Magyar Telekom, purchased iWiW for almost one billion HUF from Virgo Systems Informatikai Kft. Users (mainly Hungarians) expressed concerns that their personal data might be sold to telemarketers or used for other purposes, potentially harming their privacy. The platform has been defunct since 30 June 2014, and all user data was deleted by then.

Around the millennium, multiple companies entered the online market with similar products, but none gained significant traction until MySpace reached 115 million members. Facebook was founded in 2004, and by the time of its European market entry (with the establishment of its international headquarters in Dublin) in 2008, it had eroded the popularity of similar platforms remarkably quickly. Interestingly, Compete.com’s study ranked Facebook the most used social networking service worldwide in 2009. Around this time, following the Ürümqi riots, China blocked Facebook.

Facebook currently has 2.895 billion monthly active users, 65.9% of whom are daily users. User numbers also include approximately 1% accounts of now-deceased people, whose data still circulates in the ether. This amounts to an incredibly massive stock of personal data. These numbers empower the social media giant to formulate threats such as an ‘incapability to offer a number of their most significant products and services, including Facebook and Instagram, in Europe’ because of the difficult legal situation of international data transfers to the USA. At first glance, this announcement may seem credible, given that Facebook recently blocked certain content in Australia over a bill that would impose fees on tech giants when users share news publishers’ content. Even so, it is unlikely that Facebook will ‘let alone’ its European users.

Beyond Facebook’s popularity in Hungarian society (5.3 million users from Hungary in 2019, out of a population of 9,684,679), we witnessed an interesting example of Hungarian legal practice. In 2021, the Hungarian Competition Authority fined Facebook 1.2 billion HUF (approx. 3.75 million USD) for advertising itself as free. The decision turned on whether consumers were deceived by the claim “It’s free and always will be”. Although there is no monetary charge for using the social site, Facebook monetizes the personal information collected when people use it: it offers advertisers targeting based on that personal data, and this activity brings monetary benefits into the structure, such as revenue from the sale of personalized advertising space.

Legally speaking, when something is ‘free’, the customer is expected not to be obliged to give anything in return. Ultimately, however, consumers pay with their personal data to use the service, and Facebook misleads them in making this transactional decision by presenting the service as complimentary. Of course, Facebook does not sell the personal data itself, but lets advertisers select the target groups they want, based on personal data generated by use of the platform and collected by the social media provider.

In December 2019, the Hungarian Competition Authority found that Facebook’s practice violated the relevant legal regulation. Facebook then appealed, and as a result, both the Budapest-Capital Regional Court and the Hungarian Supreme Court (Kúria) ruled that the commercial practice did not violate Section 6 of Act XLVII of 2008 on the Prohibition of Unfair Business-to-Consumer Commercial Practices. In the courts’ interpretation, ‘free of charge’ means that the consumer does not have to pay a monetary consideration for the service and does not suffer any other significant disadvantage when using it. The courts assumed that consumers accept the Privacy Policy and Terms and Conditions when registering on Facebook; hence, they are (or should be) aware that they are providing data and consenting to its processing. On the facts, it is irrelevant whether Facebook later receives a monetary reward from its business partners for handling and transferring consumers’ personal data. According to the Hungarian Supreme Court, Facebook users are “not more disadvantaged” by tolerating targeted ads than by tolerating generic ads.

However, from a consumer point of view, by tolerating targeted ads based on their personal data, users allow businesses and Facebook to maximize their profits. In the courts’ reading, targeted ads are no more harmful to users than generic ads. In our view, however, the question is not “which type of ads (targeted vs. generic) is more harmful”, but rather “is the term free misleading in a situation where users give something in exchange”. From a purely consumer-protection point of view, we can ask whether the reasonable consumer would conclude that the privacy harm should be deemed a price, just like any monetary obligation. In truth, most would not, bearing in mind that many of them intentionally create content that opens their private lives to the public. Most users consider targeted ads not a privacy harm but “helpful assistance of the majestic Internet” in finding their preferred products and fulfilling their needs in an easier, quicker, and cheaper way.

The legal evaluation of something ‘being for free’ is dependent on whether we treat personal data as a commodity (that could be subject to financial transactions and has a monetary value) or a right. The evaluation is complex as European legal systems usually handle personal data as subject to rights-based protection while the US approach considers it a commodity. One of the most effective legal means of protecting privacy is to guarantee the protection of personal data and informational self-determination. The latter means that everyone has the right to decide what information they share about themselves or what information they do not disclose to the public. Consequently, the regulation of shared or undisclosed personal data becomes similar to that of private property (things).

Posner regarded personal data as a commodity. To put it simply: people sell themselves like products, and if they hide certain features (i.e., do not disclose personal information), they put themselves in a better light. Posner therefore outright condemns legislation that gives an individual the right to withhold information about himself, because such a right distorts or misleads the market. For example, if a candidate presents a false profile in a job interview, the prospective employer may not hire the best employee for that job. At the same time, Posner acknowledges a right to self-protection, so that others cannot pry into the characteristics of a person’s undisclosed private sphere.

The “commodity” aspect also reveals the differences between the US and European approaches to personality, personal reputation, and personal data protection. For example, the European concept of “the right to be forgotten” is bizarre in US legal thinking, which prefers transparency and the free flow of information under the First Amendment. Therefore, the USA does not recognize this right “of having an imperfect past”, even though the Court of Justice of the European Union declared it back in 2014; US courts have held that the right to be forgotten is impermissible under the First Amendment.

Did you know that a U.S. citizen is willing to pay $29 to protect their personal information, yet will pay just 50 cents more for a product offered by a merchant who has taken steps to protect buyers’ personal information?[1] This is called the privacy paradox: data subjects (users) expect their personal data to be protected but do not want to commit (material) resources to that protection. The other side of this coin is the platform’s side. Facebook CEO Mark Zuckerberg said in 2010 that “privacy is no longer the social norm”. For us, this could mean that users no longer care about their privacy, which seems true especially as users tend to provide their (sensitive) data even for considerably smaller benefits.

An excellent example is the club card system of multinational companies. They usually ask consumers to provide their e-mail address or telephone number and sign up for newsletters, and in return immediately grant a discount, a coupon, personalized ads, and further benefits. Gamification is a smart tool for motivating customers and building trust and loyalty. At the same time, companies may increase customer engagement via the positive reinforcement of rewarding loyal customers. When this happens via the “connect your social media account” button, the customer shares much more personal information than by providing an email address. Giving access to the social media profile enables companies to get to know their consumers (pretty well) and target them with personalized, direct ads. Profiling through automated decision-making and AI raises data protection concerns. This level of surveillance capitalism raises legal concerns, starting with the question: do we consider personal data as subject to rights that protect it, or do we treat it as a commodity?

One thing is sure: the personal data market is huge today, and ownership of these assets is not exercised by the data subjects (users); rather, data controllers use the collected, organized data and are not shy to sell it and turn enormous profits on it.

In the early 1990s, Laudon took the position that the solution was not legislation but the creation of an information market. He envisaged that data subjects would provide their data and assign it within the market to a group of people with similar characteristics or preferences. Anyone wishing to offer something to a given group would buy that data set, and a portion of the price (‘the dividend’) would go to the data subjects. There would also be agents in the market who would act on commission when selling the data entrusted to them.

Perhaps the conclusion is that the European attitude typically interprets data protection as a set of rights protecting individual privacy, whereas the common law and the American attitude typically treat personal data as an object of property, and still treat it so in economic relations. Although both approaches are reasonable, they share one feature: transferability. Whether personal data is deemed a commodity or a right, both can be transferred. We can sell raw personal data – sometimes without any price in return – and we can also transfer to the data controller the right to process it. Regardless of which framework is better, social media regulation and the awareness-raising of users are urgently needed.

[1] Alessandro ACQUISTI: The Economics of Privacy: Theoretical and Empirical Aspects, Carnegie Mellon University, September 12, 2013, p. 16.

Bianka MAKSÓ is a data protection advisor and an adjunct lecturer of the Data Protection LLM Program at the University of Miskolc. She defended her PhD thesis in 2019 focusing on the GDPR and the Binding Corporate Rules.

Lilla Nóra KISS is a visiting scholar at Antonin Scalia Law School, George Mason University. She participates in the Hungary Foundation’s Liberty Bridge Program and conducts her postdoctoral research on social media regulation in a comparative approach. Lilla obtained her Ph.D. degree in 2019; her dissertation addressed the legal issues of Brexit.