by admin | Apr 1, 2022 | Privacy & Data Protection
Whether we regard personal data as rights or as things, everyone can agree that it is worth a lot. Personal data is a commodity, a kind of asset everyone has, even if not everybody understands its potential. Do you?
Since Warren and Brandeis defined the “right to be let alone” in 1890, the world has changed a great deal. Their concept was among the first milestones of privacy, yet we still face the same challenges, only in a bigger, higher, stronger, and faster form.
Personal data is the fuel of the network society created by the digital economy, and the engine of this machine is social media. In our understanding, social media is made up of online platforms fed by the personal data of the masses: any medium where content (including but not limited to images, videos, messages, and sound files) is broadcast to, or capable of being broadcast to, the general public. The elements of the content are created by the users, while the framework and the free flow of that personal content are provided by the social media provider. Users and platforms are thus co-workers in creating social media; it is a jointly produced medium.
The origin (history) of social media is somewhat unclear. The Swedish social networking website LunarStorm (originally Stajlplejs) was launched in 1996 and described as “the world’s first social media on the Internet” by its founder, Rickard Eriksson; it had 1.2 million members. According to the History Cooperative’s article titled “The Complete History of Social Media: A Timeline of the Invention of Online Networking”, social media goes back to 1997 and the launch of the first social media platform, Six Degrees (which lasted until 2001 and had 3.5 million members). Although iWiW (International Who is Who), a Hungarian social networking web service launched on 14 April 2002, is not cited often in social media resources, we propose incorporating it into the current discourse. iWiW was unique in that it worked on an invitation basis, which provided a seemingly exclusive personal guarantee to its users. The system was renewed in 2005 and became multilingual, with other new features. At the peak of its success, iWiW had 4.7 million members and about 1.5 million daily users in a country of ten million (Hungary). By the end of 2010, roughly the same number of people in Hungary logged into iWiW and into Facebook every day, but then Facebook’s membership started to grow because it was better financed. The history of iWiW is worth mentioning from a personal data point of view. On 28 April 2006, T-Online, the internet service branch of Magyar Telekom, purchased iWiW for almost one billion HUF from Virgo Systems Informatikai Kft. Users (mainly Hungarians) expressed concerns that their personal data might be sold to telemarketers or used for other purposes, potentially hurting their privacy. The platform has been defunct since 30 June 2014, and all user data was deleted by then.
Around the millennium, multiple companies entered the online market with similar products, but none had significant social support until MySpace reached 115 million members. Facebook was founded in 2004, and by the time of its European market entry in 2008 (with the establishment of its international headquarters in Dublin), it had rapidly eroded the popularity of similar platforms. Interestingly, Compete.com’s study ranked Facebook the most used social networking service worldwide in 2009. Around the same time, following the Ürümqi riots, China blocked Facebook.
Facebook currently has 2.895 billion monthly active users, 65.9% of whom are daily users. The user numbers also include approximately 1% now deceased people, whose data still circulate in the ether. This amounts to an incredibly massive amount of personal data. These numbers empower the social media giant to make threats like an ‘incapability to offer a number of their most significant products and services, including Facebook and Instagram, in Europe’ because of the legally difficult situation of international data transfers to the USA. At first glance, this announcement may seem credible, given that Facebook recently blocked certain content in Australia over a bill that would impose fees on tech giants when users share news publishers’ content. Even so, it is unlikely that Facebook will actually ‘let alone’ its European users.
Besides Facebook’s popularity in Hungarian society (5.3 million users from Hungary, out of a population of 9,684,679, in 2019), we witnessed an interesting example of Hungarian legal practice. In 2021, the Hungarian Competition Authority fined Facebook 1.2 billion HUF (approx. 3.75 million USD) for advertising itself as free, finding that consumers had been deceived by the claim “It’s free and always will be”. Although there is no monetary charge for using the social site, Facebook uses the personal information collected while people use it: it provides targeting based on personal data, and this activity brings monetary benefits into the structure, such as revenue from the sale of personalized advertising space.
Legally speaking, when something is ‘free’, the customer is expected to give nothing in return. Ultimately, however, consumers pay with their personal data to use the service, and Facebook misleads them in making this transactional decision by emphasizing its free-of-charge character. Of course, Facebook does not sell the personal data itself, but it lets advertisers select the target groups they want, based on the personal data created by using the platform and collected by the social media provider.
In December 2019, the Hungarian Competition Authority found that Facebook’s practice violated the relevant legal regulation. Facebook appealed, and as a result both the Budapest-Capital Regional Court and the Hungarian Supreme Court (Kúria) ruled that the commercial practice did not violate Section 6 of Act XLVII of 2008 on the Prohibition of Unfair Business-to-Consumer Commercial Practices. In the courts’ interpretation, ‘free of charge’ means that the consumer does not have to pay monetary consideration for the service and does not suffer any other significant disadvantage when using it. The courts assumed that consumers accept the Privacy Policy and the Terms and Conditions when registering on Facebook; hence, they are (or should be) aware that they are providing data and consenting to its processing. In terms of the facts, it is irrelevant whether Facebook later receives a monetary reward from its business partners for handling and transferring the personal data of many consumers. According to the Hungarian Supreme Court, Facebook users are “not more disadvantaged” by tolerating targeted ads than by tolerating generic ads.
However, from a consumer point of view, by tolerating targeted ads based on their personal data, users allow businesses and Facebook to maximize their profits. On this view, targeted ads are no more harmful to users than generic ads. The question here, in our view, is not “which type of ad (targeted or generic) is more harmful”, but rather “is the term free misleading in a situation where users give something in exchange”. From a purely consumer-protection point of view, we can ask whether a reasonable consumer would conclude that the privacy harm should be deemed a price, just like any monetary obligation. Truth be told, most would not, bearing in mind that many users intentionally create content that opens their private lives to the public. Most users do not consider targeted ads a privacy harm but rather “helpful assistance of the majestic Internet” in finding their preferences and the products fulfilling their needs in an easier, quicker, and cheaper way.
The legal evaluation of something ‘being free’ depends on whether we treat personal data as a commodity (which can be the subject of financial transactions and has a monetary value) or as a right. The evaluation is complex, as European legal systems usually give personal data rights-based protection, while the US approach considers it a commodity. One of the most effective legal means of protecting privacy is to guarantee the protection of personal data and informational self-determination. The latter means that everyone has the right to decide what information they share about themselves and what information they do not disclose to the public. Consequently, the regulation of shared or undisclosed personal data becomes similar to that of private property (things).
Posner said personal data is a commodity. To put it simply: people sell themselves like products, and if they hide certain features (i.e., do not disclose personal information), they put themselves in a better light. Posner therefore outright condemns legislation that grants individuals a right to withhold information about themselves, because such a right distorts or misleads the market. For example, if a candidate presents a false profile in a job interview, the prospective employer may not be hiring the best employee for that job. At the same time, Posner acknowledges a right to self-protection, so that others cannot explore the characteristics of a person’s undisclosed private life.
The “commodity” aspect also reveals the differences between the US and European approaches to personality, personal reputation, and personal data protection. For example, the European concept of “the right to be forgotten” is bizarre to US legal thinking, which prefers transparency and the free flow of information under the First Amendment. The USA therefore does not recognize this right “of having an imperfect past”, even though the Court of Justice of the European Union declared it back in 2014; US courts have held that the right to be forgotten is impermissible under the First Amendment.
Did you know that a U.S. citizen is willing to pay $29 to protect their personal information, yet pay just 50 cents more for a product offered by a merchant who has taken steps to protect buyers’ personal information?[1] This is the privacy paradox: data subjects (users) expect their personal data to be protected but do not want to provide (material) assets for this. The other side of the coin is the platforms’ side. Facebook CEO Mark Zuckerberg said in 2010 that “Privacy is no longer the social norm”. For us, this could mean that users no longer care about their privacy, which rings true, especially as users tend to hand over even (sensitive) data for considerably smaller benefits.
An excellent example is the club card system of multinational companies. They usually ask consumers to provide an e-mail address or telephone number and sign up for newsletters, in exchange for an immediate discount, a coupon, personal ads, and further benefits. Gamification is a smart tool for motivating customers and building trust and loyalty, and companies may increase customer engagement via the positive reinforcement of rewarding loyal customers. When this happens via a “connect your social media account” button, the customer shares much more personal information than an email address. Giving access to the social media profile enables companies to get to know their consumers pretty well and to target them with personalized, direct ads. Profiling through automated decision-making and AI raises data protection concerns. This level of surveillance capitalism raises legal questions, starting with: do we consider personal data the subject of protective rights, or do we treat it as a commodity?
One thing is sure: the personal data market is huge today, and ownership of these assets is not exercised by the data subjects (users); rather, data controllers use the collected, organized data and are not shy about selling it and turning enormous profits on it.
In the early 1990s, Laudon took the position that the solution was not legislation but the creation of an information market. He envisaged that the data subject would provide his data and assign it within the market to a pool containing the data of people with similar characteristics or preferences. Anyone who wants to offer something to a given group buys that data set, and a portion of the price (‘the dividend’) goes to the data subject. There would also be agents in the market, selling the data entrusted to them on a commission basis.
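To make the mechanics concrete, here is a minimal sketch of how such a market could work. Everything in it (the class, the pool names, the 70/10 revenue split) is our own hypothetical illustration of Laudon’s idea, not a description of any real or proposed system.

```python
# Hypothetical sketch of Laudon's "information market": data subjects deposit
# their data into pools of similar profiles; a buyer purchases a pool, and
# each contributor receives a share of the price as a "dividend".
from collections import defaultdict

class InformationMarket:
    def __init__(self, dividend_rate=0.7, agent_commission=0.1):
        # Assumed split: 70% of each sale flows back to the data subjects,
        # 10% to the selling agent, the rest to the market operator.
        self.dividend_rate = dividend_rate
        self.agent_commission = agent_commission
        self.pools = defaultdict(list)      # pool name -> contributing subjects
        self.balances = defaultdict(float)  # participant -> accumulated payout

    def deposit(self, subject_id, pool):
        """A data subject assigns their data to a pool of similar profiles."""
        self.pools[pool].append(subject_id)

    def buy_pool(self, pool, price, agent_id):
        """A buyer purchases a pool; contributors split the dividend equally."""
        members = self.pools[pool]
        if not members:
            raise ValueError(f"no data in pool {pool!r}")
        dividend = price * self.dividend_rate / len(members)
        for subject in members:
            self.balances[subject] += dividend
        self.balances[agent_id] += price * self.agent_commission

market = InformationMarket()
market.deposit("alice", "runners_25_34")
market.deposit("bob", "runners_25_34")
market.buy_pool("runners_25_34", price=100.0, agent_id="agent_1")
print(market.balances)  # alice and bob each receive a 35.0 dividend
```

The essential point of the design survives even this simplification: the data subject, not only the controller, participates in the proceeds of every transaction over their data.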
Perhaps a different conclusion is that the European attitude typically interprets data protection as a set of rights protecting individual privacy, while the common law and the American attitude typically treat personal data as an object of property, and still treat it so in economic relations. Although both aspects are reasonable, they have one feature in common: transferability. Whether personal data is deemed a commodity or a right, it can be transferred. We can sell the raw personal data (sometimes without any price in return), and we can also transfer to the data controller the right to process it. Regardless of which framework is better, there is an urgent need for social media regulation and for raising users’ awareness.
[1] ACQUISTI, Alessandro: The Economics of Privacy: Theoretical and Empirical Aspects, Carnegie Mellon University, September 12, 2013, p. 16.
Bianka MAKSÓ is a data protection advisor and an adjunct lecturer of the Data Protection LLM Program at the University of Miskolc. She defended her PhD thesis in 2019 focusing on the GDPR and the Binding Corporate Rules.
Lilla Nóra KISS is a visiting scholar at Antonin Scalia Law School, George Mason University. She participates in the Hungary Foundation’s Liberty Bridge Program and conducts postdoctoral research on social media regulation in a comparative approach. Lilla obtained her Ph.D. in 2019; her dissertation addressed the legal issues of Brexit.
by admin | Nov 22, 2023 | European Union
In 2021, the French Council of State (Conseil d’État) handed down the “French Data Network” decision concerning the primacy of EU law. The aim of this post is to study this decision and to review its main statements regarding the French practice of applying EU law. French constitutional identity is strong and built on a solid historical background; at the same time, France is one of the most dedicated supporters of the European Union on the road toward an ever-closer Union and further integration. The Data Network case nevertheless has far-reaching implications for the interpretation of EU law, and for how France, the engine of integration, reconciles EU law with its own national (constitutional) law when adjudicating EU-law-related cases.
Background of the case
People generate metadata through their online activities. Previous searches, IP addresses, location, or communication on social media are part of the private lives of people using the internet. In recent years, however, the law has mostly reacted to changes in online data processing and privacy, rather than acting against platforms that gather people’s data and building a protective legal environment before malpractice can occur. As to the privacy of personal data, the crucial question is always who can access the data and to what extent. (More on the (mis)use of our personal data here, and on the “regulation revolution” here.)
The French telecommunications law (available here) is favorable toward data retention for purposes of national security. The French regulatory provisions at issue (Post and Electronic Communications Code, Article R10-13) allow the authorities to retain, in a general and indiscriminate manner and for a maximum period of one year, data from electronic communications operators for the purposes of investigating, establishing, and prosecuting criminal offenses. According to Article R10-13 of the Code, this data allows for the identification of the user, the location of the communication, and its technical characteristics (date, time, and duration). Moreover, the Internal Security Code (specifically Book VIII) also allows certain techniques for preventing acts of terrorism by surveilling persons’ electronic communications.
Enabling state organs to retain individuals’ data in general is highly debated, even within the legal community. This does not, however, change the fact that France has suffered a lengthy list of terrorist attacks against its citizens in recent years. In 2021, the year the French Data Network decision came out, there were 140 terrorism-related arrests in France, and the country experienced the highest number of attacks (five) in Europe. This is an important sidebar before delving into the facts of the case and drawing any conclusions about France’s decision regarding the retention of metadata.
Constitutional and European law aspects
The primacy of European law establishes that EU law always prevails when a conflict arises between an aspect of EU law and an aspect of national law. This legal principle has been questioned several times throughout the history of the European Union; I have presented a few of these challenges in earlier posts. France is famous for its creative solutions, “mingling” compliance with both its Constitution and the relevant EU law.
The Constitution of the Fifth Republic emphasizes in Article 88(1) that French national law must comply with EU law. Under the primacy principle, this also means the necessary adaptation of national law to European regulations. However, if EU law offers no effective guarantees for constitutional requirements, the administrative court must set aside EU law to the extent required to guarantee compliance with the Constitution. In this case, the constitutional requirements at stake include protecting France from criminal activity and terrorism. Because metadata is highly effective in modern crime prevention, France protects its laws allowing such data retention.
Lost and Found? The Conclusions of the French Data Network Case
The most important finding of the case is that France preserves its capacity to store the above-mentioned data of individuals for intelligence purposes (it is not lost). However, the way it establishes this capacity is a clever legal maneuver by the Council of State. In the words of Thibaut Larrouturou, rather than deciding between the superiority of national law over EU law (or the other way around, with the Constitution having the highest authority), the Council of State chose a conciliatory perspective for resolving the legal conflict. The conflict arose when the French government asked the Council of State not to apply the relevant CJEU precedent, which holds that the storage of personal data interferes with the privacy of individuals. That precedent, Tele2 Sverige, limits national laws on the general retention of data to the most serious threats to public security and prohibits its use for the investigation of ordinary crimes. In this regard, the French Code on data retention established a wider competence than the case law of the CJEU permitted. Article 15 of the Directive on Privacy and Electronic Communications also deals with this question; however, it gives the Member States the possibility to take the measures necessary for the protection of public security, defence, State security, and the enforcement of criminal law.
The reconciliation between EU law and constitutional law
To quote Brunessen Bertrand, the Council of State took a very pragmatic approach to the case at hand rather than setting aside EU law. In its judgment, the Council of State drew attention to an inconsistency in the CJEU’s solution regarding the retention of data: targeted retention is impossible to apply, since the commission of a crime obviously cannot be anticipated in advance. Therefore, indiscriminate retention, which the CJEU prohibited for “ordinary crimes”, is necessary for the effectiveness of criminal investigations.
So what exactly happened? Did the Council of State disregard the primacy of EU law by overruling the jurisprudence of the CJEU? Technically, the Council of State set aside the CJEU’s position without formally overruling it. This was possible because the Council did not disregard the CJEU’s jurisprudence; it merely corrected what it saw as a legal error. In reality, there is no way to anticipate the gravity of a crime that will be committed in the future. Crimes can only be evaluated after they have been committed, so Member States could not otherwise store the data that could lead to catching the offenders.
In the French Data Network case, the Council of State also implicitly referred to the importance of the French Constitution, or even to the supremacy of the Constitution, by essentially neutralizing the applicability of the CJEU’s case law. To quote Professor Bertrand once more, the Council neutralized EU law through interpretation, so that the decision’s findings comply with the constitutional requirement to protect public order.
Márton Balogh is a law student in his fourth undergraduate year at the University of Pécs, Hungary, and a student at the MCC Law School. As of this year, he is a holder of the graduate scholarship of the Aurum Foundation. He is mostly interested in European law. His current study and research interests include the practice of the European Court of Justice in the Common Foreign and Security Policy, the primacy of European law, and migration and asylum law in the European Union. He envisions his future working in the European Union, where he currently interns at the European Parliament.
by admin | Oct 2, 2023 | Privacy & Data Protection
When we say the words “data protection”, for most of us the European Union’s General Data Protection Regulation (GDPR) comes to mind. However, there are many different data protection laws around the world, which I shall attempt to briefly showcase in this post.
First of all, I must point out that data protection has historically had a huge presence on the European continent, so the fact that the EU now has strict legislation in place to protect privacy is no surprise. The first ever data protection law was Sweden’s Data Act, passed in 1973, which came into effect the following year. In 1981, the Council of Europe adopted the Data Protection Convention, rendering the right to privacy a legal imperative. It is important to note that privacy and data protection are not the same, but they are closely intertwined, especially when we talk about the effectiveness of protecting personal data. There were many preparatory documents and various milestones in the EU before the GDPR came into effect. Surprisingly, though, this is not where the right to privacy first emerged. That place would be the United States, in 1890, when two US lawyers, Samuel D. Warren and Louis Brandeis, wrote The Right to Privacy, an article that argued for the “right to be let alone”, using the phrase as a definition of privacy. From then on, this right made its way into international agreements and slowly gained popularity, culminating in becoming a crucial aspect of our lives. With the technological advances of AI and other technologies relying on data, privacy has become precious and fragile. How good a job does the EU do in protecting it? What about the US?
Currently, when countries are ranked by privacy, focusing on Internet users’ rights and the Internet privacy laws each country has in place, Estonia, Iceland, and Costa Rica sit at the top, followed by Canada, Georgia, and Armenia. Unsurprisingly, China came last in the ranking of 70 countries; but even China has privacy laws in place. The Personal Information Protection Law (‘PIPL’) entered into effect on 1 November 2021 and is China’s first comprehensive data protection law, governing personal information processing carried out by entities or individuals within China. It was introduced together with the Cybersecurity Law and the Data Security Law. The PIPL is partly modeled after the GDPR, containing principles of personal information processing and consent and non-consent grounds for processing, but there is no single authority in China responsible for supervising compliance with personal data laws.
Also modeled after the GDPR are the Privacy Amendment (Notifiable Data Breaches) to Australia’s Privacy Act, Brazil’s Lei Geral de Proteção de Dados (LGPD), Egypt’s Law on the Protection of Personal Data, and India’s Personal Data Protection Bill. Despite the close resemblance, there are clear differences: in India, for example, more discretion is given to the Central Government to decide how the law is enforced and when exceptions can be made. In Egypt, the fines for non-compliance are significantly lower than under the GDPR, with a minimum of 100,000 LE (approx. 5,560 EUR) and a maximum of 1 million LE (approx. 55,600 EUR), but data breaches can also result in prison time.
New amendments to New Zealand’s 1993 Privacy Act came into effect on December 1, 2020; similarly to the GDPR, they require notifying authorities and affected parties of data breaches and introduce new restrictions on offshore data transfers. However, the fines for non-compliance are significantly lower than under the GDPR (the maximum fine is just 10,000 NZD, although there is a mechanism for class-action suits), and the “right to be forgotten” is not included in the Privacy Act.
These are some of the data protection laws with significant similarities to the GDPR; but seeing that no EU country except Estonia made it into the ranking of the best countries by Internet users’ privacy, it is worth asking whether the GDPR is actually the best regulation out there.
While researching this topic, I found that 137 out of 194 countries have put in place legislation to secure the protection of data and privacy. In Africa and Asia, 61 and 57 percent of countries, respectively, have adopted such legislation. Naturally, some form of legislation is better than no safeguards at all, but I think the most important aspect of any law is not the written word but how it is enforced in practice. Personally, I believe that the true effect of the GDPR comes not from its specific text alone, but from how it has shaped the way other countries relate to data protection, and from how significant the case law has become since data breaches started to be taken seriously. The laws I briefly mentioned have ever-expanding requirements, and new legislation is being put in place in several countries (such as Canada’s new data privacy law, the CPPA).

Data protection law can also differ completely within a country, as in the case of the US: there are no comprehensive data protection laws at the federal level, only some federal legislation that protects data on a more general level, since the US, mindful that strict rules might restrict business competitiveness, typically avoids them. Several US states have therefore created their own laws. California’s Consumer Privacy Act (CCPA) provides privacy rights and consumer protection, allowing residents of the state to establish precisely how their personal data is being collected and what it is being used for. The New York Privacy Act obligates companies to acquire consumers’ consent, disclose their de-identification processes, and install controls and safeguards to protect personal information. Laws are in place in Colorado, Connecticut, and Virginia, with bills introduced in Utah, Indiana, Iowa, Montana, Oregon, Tennessee, and Texas. While there had been an EU-US Privacy Shield framework in place to make GDPR compliance more manageable for organizations operating on both sides of the Atlantic, the agreement was struck down by the European Court of Justice, which was of the opinion that the rights of EU data subjects were not adequately protected from US surveillance.
Data protection is a national security issue, so it is understandable that different nations might feel apprehensive about data flows. But we must understand that we are living in a world so interconnected that simply creating data protection laws will never be enough to actually prevent misuse or data breaches. But is cooperation possible on an international level in such a sensitive matter? Experts have previously made a case for a global privacy standard, which would be easier on data protection officers and authorities, stating that “while the European Data Protection Board has provided guidance about adequacy thresholds, each company’s risk assessment necessarily will be subjective and result in inconsistent application of the GDPR’s data privacy scheme.” There is an international data privacy treaty in place, but it is wholly ineffective; this leads me back to my point about the importance of implementation when it comes to any regulation. As long as different nations have diverging interests, which will always be the case, an international data protection treaty seems far away. For business purposes, many countries attempt to comply with the GDPR, which has forced its way into the consciousness of the international community, but it is still often ignored by companies powerful enough to pay a fine rather than change their lucrative practice of selling personal data.
So what is the solution? Can we find any common ground among privacy laws from around the world, especially with the emergence of newer technologies and with AI legislation also taking shape worldwide? Or will we just keep trying to comply with differing regulations until one day we find that privacy has vanished altogether, if it hasn’t already?
Only time will tell what this possibility means for the future of data protection, but one thing is for sure: privacy laws became more significant in the eyes of world leaders through legislative effort from the EU, and are here to stay. Let’s hope that something similar will happen with regard to Artificial Intelligence, so that we may have an imperfect, but slightly safer future.
Mónika Mercz, JD, specializes in English legal translation. She is a Junior Researcher at the Public Law Center of Mathias Corvinus Collegium Foundation in Budapest, while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Mónika’s past and present research focuses on constitutional identity in EU Member States, with specific focus on essential state functions, data protection aspects of DNA testing, environmental protection, children’s rights, and Artificial Intelligence.
Email: mercz.monika@mcc.hu
by Mónika Mercz | Jan 28, 2023 | Privacy & Data Protection, Tech & AI
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted a Joint Opinion on the Proposal for a Regulation to prevent and combat child sexual abuse on 29 July 2022. While this has not made huge waves in public discourse, we must take a moment to discuss what this stance means for how we view data protection in relation to child protection, and specifically the fight against online child sexual abuse material (CSAM). International Data Protection Day seems like a good occasion to contribute to this debate.
The Proposal aims to impose obligations regarding the detection, reporting, removal, and blocking of known and new online CSAM. Under the Proposal, the EU Centre and Europol would work closely together to transmit information about these types of crime. The EDPB and EDPS recommend that, instead of law enforcement being given direct access to data, each case should first be assessed individually by entities in charge of applying safeguards intended to ensure that the data is processed lawfully. To mitigate the risk of data breaches, private operators and administrative or judicial authorities should decide whether the processing is allowed.
While child sexual exploitation must be stopped, the EDPB stated that limitations on the rights to private life and data protection must remain lawful, so only strictly necessary and proportionate information should be retained in these cases. The conditions for issuing a detection order for CSAM and child solicitation lack clarity and precision, which could unfortunately lead to generalised and indiscriminate scanning of the content of virtually all types of electronic communications. But is our privacy’s safety truly worth the pain suffered by minors? Is it not already too late for our society to put privacy concerns first anyway? I believe this issue is much more multifaceted than it seems at first glance.
There are additional concerns regarding the use of artificial intelligence to scan users’ communications, which could lead to erroneous conclusions. While human beings make mistakes too, the fact that AI is not properly regulated is a big issue: this fault in the system may lead to many false accusations. The EDPB and EDPS stated in their opinion that “encryption contributes in a fundamental way to the respect of private life and to the confidentiality of communications, freedom of expression, innovation and growth of the digital economy.” However, it must be noted that more than one million reports of CSAM were made in the European Union in 2020, and the COVID-19 pandemic was undoubtedly a factor in the 64% rise in such reports in 2021 compared to the previous year. This is cause for concern and should be addressed properly.
In light of these opposing views about the importance of individuals’ rights, I aim to find some semblance of balance. The real question is: how can we ensure that every child is protected from sexual exploitation, perpetrators are found and content is removed, while protecting ourselves from becoming completely transparent and vulnerable?
- Why should we fight against the online sexual exploitation of children?
First of all, I would like to point out how utterly vital it is to protect children from any form of physical, psychological, or sexual abuse. Protecting children is not only a moral issue but also the key to humanity’s future. Let me provide some facts underlined by mental health experts. We know that any form of child sexual exploitation has short-term effects including regressive behavior, performance problems at school, and an unwillingness to participate in activities. Long-term effects include depression, anxiety and anxiety-related behavior, eating disorders, obesity, repression, and sexual and relationship problems. These serious issues can affect people well into adulthood, culminating in a lower quality of life and leaving members of society less productive.
In addition to these serious psychological consequences, the fundamental rights of victims are infringed: the rights to life, health, personal freedom, and security, as well as the right not to be tortured or exposed to other inhuman, cruel, or degrading treatment, as guaranteed by the UDHR and other international instruments. In addition to the efforts made by countries that ratified the Convention on the Rights of the Child, I must also mention the United States Supreme Court decision and the lower court decisions in United States v. Lanier. In this case we can see that, in the US interpretation, sexual abuse violates a recognized right of bodily integrity encompassed by the liberty interest protected by the 14th Amendment. Although this American finding dates back to 1997, that does not strip the statement of its validity in our online world.
To speak of the legal framework governing the issue in my home country, Hungary, the Fundamental Law also protects the aforementioned rights. Article XV, under “Freedom and Responsibility”, states: “(5) By means of separate measures, Hungary shall protect families, children, women, the elderly and those living with disabilities.” While this is an excellent level of protection, I would propose adding the clause “Hungary shall take measures to protect children from all forms of sexual exploitation”; even if we do not add it to our constitution, we must make it a priority. Act XXXI of 1997 on the Protection of Children and the Administration of Guardianship is simply not enough to keep children safe against new forms of sexual abuse, in particular online exploitation. With the dark web providing a place for abusers to hide, what options do we have to expose these predators and recover missing children?
A study explored a sample of 1,546 anonymous individuals who voluntarily responded to a survey while searching for child sexual abuse material on the dark web. 42% of the respondents said that they had sought direct contact with children through online platforms after viewing CSAM, and 58% reported feeling afraid that viewing CSAM might lead to sexual acts with a child or adult. We can see, then, that the situation is indeed dire and needs a firm response at the EU level, or possibly even at a wider international level. Sadly, cooperation between countries with different legal systems is incredibly difficult and time-consuming, and could also lead to violations of privacy as well as false accusations and unlawful arrests. This is where several of the EDPB’s and EDPS’s concerns arose, in addition to the data protection aspects mentioned before.
- Avoiding a Surveillance State
Having talked about the effects and frequency of child sexual abuse online, I have no doubt that the readers of this blog agree that drastic steps are needed to protect our most vulnerable. However, the issue is made difficult by the fear that data provided by honest people wishing to help catch predators could lead to data protection essentially losing its meaning. Many dire consequences could invade our lives if data protection were, metaphorically, to “fall”. It is enough to think of China’s Social Credit System and surveillance state, a prime example of what can happen when the members of society, rather than the state, become transparent. Uncontrolled access to anyone’s and everyone’s data under the guise of investigating online abuse could easily strengthen surveillance capitalism, make our data completely visible, and cause privacy essentially to cease to exist.
Right now, personal data is protected by several laws, notably the GDPR and, in Hungary, Act CXII of 2011 on the Right of Informational Self-Determination and on Freedom of Information. The latter is upheld in particular through the work of the Hungarian National Authority for Data Protection and Freedom of Information. The Fundamental Law of Hungary also affirms the vital nature of data protection in Article VI (3)[1] and (4)[2]. I advise readers to take a look at the relevant legal framework themselves; here I shall focus on the data protection aspects pertinent to this discussion.
There are several declarations by politicians and institutions alike that reinforce how essential this field of law is. This is of course especially true in the case of the European Union. As has been previously stated in one of our posts here on Constitutional Discourse, by Bianka Maksó and Lilla Nóra Kiss, the USA has a quite different approach. But can we justify letting children go through horrific trauma in order to protect our personal information? Which one takes precedence?
- A moral issue?
On the most basic level, we might believe that our information cannot be truly protected, so we might as well take a risk and let our data be scanned, if this is the price we must pay in order to protect others. But are we truly protecting anyone, if we are making every person on Earth more vulnerable to attacks in the process?
The Constitutional Court of Hungary has long employed the examination of necessity and proportionality to test which of two colliding fundamental rights needs to be restricted. I shall defer to its wisdom and aim to replicate its thought process in an incredibly simplified version, as the obvious limitations of a blog post make necessary. My wish is to consider whether we could justify an infringement of data protection in the face of a child’s right to life and development.
First, I shall examine whether the restriction of our right to data protection is absolutely necessary. If the right of children not to suffer sexual exploitation online (which, again, contains facets of their rights to life, health, personal freedom, and security, as well as their right not to be tortured or exposed to other inhuman, cruel, or degrading treatment) can be upheld in another, less intrusive but still adequate way, then restricting privacy is not necessary. While expert opinion leans towards the view that privacy must be upheld, I would respectfully like to look at the question from another side. We are currently trying to implement measures to stop online child abuse in all its forms, but they yield few results, and the problem is growing. Many claim that cooperation between law enforcement, hackers, different countries, and many other actors could curb this crime further. Could we ever completely stop it? Probably not. But could we uphold children’s right not to be tortured or exposed to other inhuman, cruel, or degrading treatment, and their right to healthy development? Maybe.
I put forward the idea that at this point we have no more effective measure to stop online child sexual abuse than restricting our own protection of personal data to a degree; child protection is a public interest, and protecting our posterity also has constitutional value. Additionally, Article I (3) of the Fundamental Law of Hungary provides that “(…) A fundamental right may only be restricted to allow the effective use of another fundamental right or to protect a constitutional value, to the extent absolutely necessary, proportionate to the objective pursued and with full respect for the essential content of that fundamental right.” As I have argued, child protection is undoubtedly of constitutional value and could warrant the restriction of data protection. On the other hand, the Constitutional Court of Hungary has established that privacy protection is also of constitutional value.[3]
As the second step of the test, and based on my previous observations, I must wholeheartedly agree that data protection should only be restricted to the most indispensable extent. Because these two issues are so intertwined and difficult to balance, we could adopt a new policy specifically for cases where CSAM is sought by looking into personal data. I firmly believe that such a solution could be found, but it would require establishing new agencies that specifically deal with the data protection aspects of such cases. The prevalence of this material on the Internet also makes it necessary to update the laws governing the relationship between privacy and recordings of CSAM.
I cannot think of a better alternative right now than a slight restriction of privacy, even with the added risks. The way things are progressing, the added weight of the global pandemic, inflation, war, and climate change will lead to more children being sold and exploited for gain on online platforms, which are often untraceable. Are we willing to leave them to their fate in the name of protecting society as a whole from possibly becoming more totalitarian? Are we on our way to losing privacy anyway?
These are all questions for future generations of thinkers, who may yet develop newer technologies and safer practices that make balancing these two sides of human rights possible. Until then, I kindly advise everyone reading this article to think through the possible consequences of taking action in either direction. Hopefully, on the International Day of Data Protection, I have gauged your interest in a discussion that could lead to concrete answers and new policies across the EU in the future.
Mónika MERCZ lawyer, is a PhD student in the Doctoral School of Law and Political Sciences at Károli Gáspár University of the Reformed Church, Budapest. A graduate of the University of Miskolc and former Secretary General of ELSA Miskolc, she currently works as a Professional Coordinator at Public Law Center of Mathias Corvinus Collegium. She is a Member of the Editorial Board at Constitutional Discourse blog.
E-mail: monika@condiscourse.com
[1] (3) Everyone shall have the right to the protection of his or her personal data, as well as to access and disseminate data of public interest.
[2] (4) The application of the right to the protection of personal data and to access data of public interest shall be supervised by an independent authority established by a cardinal Act.
[3] Hungarian Constitutional Court Decision 15/2022. (VII. 14.) [24]
by admin | Jan 19, 2023 | Privacy & Data Protection
In our digital economy, privacy has taken center stage. Given that spotlight, we have already seen regulatory intervention into markets with the EU’s GDPR and DMA. (More generally, the GDPR and DMA are part of a larger body of regulation that the EU has passed, or is contemplating passing, to address large platforms; see Márton Sulyok’s “How to Tackle IT?” published on this blog.) While the verdict is still out, the early empirical evidence strongly suggests that, whatever its privacy benefits, the GDPR has had negative economic consequences.
Because the large tech platforms that tend to be in the bullseye of regulators on both sides of the Atlantic give their products away and live off consumer information, a conventional wisdom that has arisen is that market power becomes manifest through degraded privacy protections. In other words, the assertion is that, when platforms have more market power, they lower their privacy quality. Yet, in a recent article, Antitrust & Privacy: It’s Complicated, our empirical results challenge this conventional wisdom. In this blog post, we contribute to the debate surrounding personal data protection that has already been started by Bianka Maksó and Lilla Kiss also on this blog.
Privacy and antitrust have been on a collision course for some time now. For instance, in the U.S., an executive order from the president denounced dominant online platforms for using their market power “to gather intimate personal information that they can exploit for their own advantage.” The chair of the U.S. Federal Trade Commission (FTC) has expressed concern that “[m]onopoly power […] can enable firms to degrade privacy without ramifications.” In the EU, the story is the same: the German Bundeskartellamt, for example, brought a case against Facebook based on the theory that violating consumers’ privacy rights under the GDPR gave Facebook a data advantage that helped cement its dominant position.
On a superficial level, the negative relationship between privacy quality and market power sounds “right”—after all, we often hear that “if the product is free, then the product is me.” This leads to the following testable hypothesis: if data is the price that we pay for using these free platforms, market power will become manifest through lower levels of privacy.
In our paper, we address this hypothesis both theoretically and empirically. On a theoretical level, equating privacy and price is problematic for several reasons. First, while privacy is a “normal good,” in that, all else equal, consumers prefer more privacy to less, how consumers value privacy relative to other products is uncertain. Specifically, the “Privacy Paradox” suggests that, although consumers profess to care deeply about their privacy in surveys, their revealed behavior suggests otherwise. The root cause of this paradox is the subject of considerable debate. But whether rational choice, asymmetric information, or cognitive biases are to blame is beside the point—if privacy does not drive consumers’ marketplace choices, then privacy is not a relevant dimension of competition.
Second, unlike price, user data is an input into a larger production process to produce some type of output. That is, unlike a monopolist who enjoys increased profits immediately when they exercise market power by reducing the quality of their product (and, hence, the monopolist’s costs), a firm can profit from increased levels of data collection only by taking an action to monetize them. And this monetization process provides benefits, typically through customization of advertisements or services (e.g., recommendation engines in streaming services or bespoke workouts in fitness apps). Thus, the relationship between the collection of user data and consumer welfare is not necessarily negative—again, unlike in the case for price. Finally, contrary to popular opinion, there is no general economic result that establishes a relationship between greater competition and product quality. To the extent that we view privacy as a dimension of quality, the result carries through—there is no a priori reason to assume that competition is more likely to result in better privacy protection than monopoly.
The relationship between competition intensity and privacy quality is further complicated for multisided platforms, which cater to both users and advertisers/sellers. Put simply, while users may value more privacy, advertisers/sellers value less user privacy. A platform balances the competing incentives of these two groups. Moreover, if we consider that users themselves may benefit from more personalized content generated from user data, then the story is even more complicated.
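This balancing act can be made concrete with a toy model. The sketch below is entirely our own construction (the functional forms and coefficients are arbitrary assumptions, not estimates from the paper): user participation rises with the platform’s privacy level, advertising revenue per user falls with it, and the profit-maximizing privacy level ends up interior rather than minimal.

```python
# Toy illustration (our own construction, not the authors' model) of a
# platform balancing users, who value privacy, against advertisers, who
# value user data. The privacy level p lies in [0, 1].

def users(p):
    # User participation rises with privacy quality (assumed linear).
    return 1.0 + 2.0 * p

def revenue_per_user(p):
    # Advertisers pay less per user when less data is collected.
    return 1.0 - 0.8 * p

def profit(p):
    return users(p) * revenue_per_user(p)

# Grid search for the profit-maximizing privacy level.
best_p = max((i / 100 for i in range(101)), key=profit)
print(best_p)  # prints 0.37, close to the analytic optimum p* = 0.375
```

Under these assumptions the platform chooses neither zero nor maximal privacy, which is precisely why theory alone cannot sign the effect of market power on privacy quality.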
Theory can only take you so far, however. What is happening in the real world? Namely, what is the empirical relationship between market power and privacy quality? Surprisingly, little work has been done to answer this question. We attempt to fill that void with our study. We examined the relationship between various measures of market concentration—an imperfect proxy for market power, but one used by competition authorities throughout the world—and privacy levels for mobile apps on the Google Android platform and popular websites.
For mobile apps, we measure privacy quality using PrivacyGrade.org, a third-party assessment of app privacy practices from a group of researchers at Carnegie Mellon University. Our results suggest that no relationship exists between privacy grades and our proxies for market power, that is, market shares based on Google Play Store categories and market concentration, i.e., the Herfindahl-Hirschman Index (HHI). We also find a robust, negative relationship between privacy and app quality ratings, consistent with a tradeoff between privacy and other dimensions of product quality that consumers value.
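For readers unfamiliar with the HHI, it is simply the sum of squared market shares, expressed in percentage points, so it runs from near 0 (atomistic competition) to 10,000 (monopoly). A minimal sketch, with invented download counts standing in for actual Play Store data:

```python
# The Herfindahl-Hirschman Index (HHI): the sum of squared market shares,
# in percentage points. The download counts below are invented for
# illustration; they are not the paper's data.

def hhi(quantities):
    """HHI from raw quantities (e.g., app downloads in one category)."""
    total = sum(quantities)
    shares = [100 * q / total for q in quantities]  # shares in percent
    return sum(s ** 2 for s in shares)

# A hypothetical Play Store category with four apps:
downloads = [500_000, 300_000, 150_000, 50_000]
print(round(hhi(downloads)))  # 3650

# Monopoly benchmark: a single firm yields the maximum HHI of 10,000.
print(hhi([1]))  # 10000.0
```

By the thresholds in the 2010 US Horizontal Merger Guidelines, a market with an HHI above 2,500 is generally considered highly concentrated, which is why the index serves as a workable (if imperfect) proxy for market power.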
For websites, we measure privacy quality using DuckDuckGo’s privacy ratings for websites in thirty-seven website categories (e.g., Search, Health, News), taken from SimilarWeb. While these categories do not necessarily correspond to the “relevant product markets” used in competition law, they represent independently created groupings of sites based on content tags and website self-identification. Again, the results suggest no relationship between privacy ratings and market concentration measures.
Combined, our empirical results cast serious doubt on the validity of the conventional wisdom that firms exercise market power by reducing privacy; they also suggest that app developers use consumer data to enhance the quality of their products. For competition policy, this means that antitrust law appears to be a poor vehicle for addressing perceived privacy problems. To the extent that the marketplace is failing to produce optimal levels of privacy, we suggest that consumer protection law aimed at increasing consumers’ access to information and firms’ ability to credibly commit to higher privacy quality is likely the better policy tool.
The presumption that privacy and market power are linked is neither supported by theory nor empirics, which suggests that bringing high-profile antitrust cases against large platforms is unlikely to result in higher levels of privacy protection. The relationship between privacy and market power is complicated, and as such, the debate surrounding competition law and privacy could benefit from an injection of both nuanced theoretical considerations and more empirical evidence.
James C. Cooper is Professor of Law and Director, Program on Economics & Privacy, Antonin Scalia Law School, George Mason University; previously served as Deputy Director of Economic Analysis in the Bureau of Consumer Protection, U.S. Federal Trade Commission.
John M. Yun is Associate Professor of Law and Deputy Executive Director, Global Antitrust Institute, Antonin Scalia Law School, George Mason University; previously served as an Acting Deputy Assistant Director in the Bureau of Economics, Antitrust Division, U.S. Federal Trade Commission.