by admin | Oct 2, 2023 | Privacy & Data Protection
When we hear the words “data protection”, most of us think of the European Union’s General Data Protection Regulation (GDPR). However, there are many other data protection laws around the world, which I shall attempt to briefly showcase in this post.
First of all, I must point out that data protection has long had a strong presence in Europe, so it is no surprise that the EU now has strict legislation in place to protect privacy. The first ever data protection law was Sweden’s Data Act, which was passed in 1973 and came into effect the following year. In 1981, the Council of Europe adopted the Data Protection Convention, rendering the right to privacy a legal imperative. It is important to note that privacy and data protection are not the same, but they are closely intertwined, especially when we talk about the effectiveness of protecting personal data. Many preparatory documents and milestones followed in the EU before the GDPR came into effect.
Surprisingly, though, this is not where the right to privacy first emerged. That place would be the United States, in the year 1890, when two US lawyers, Samuel D. Warren and Louis Brandeis, wrote The Right to Privacy, an article that argued for a “right to be let alone”, using the phrase as a definition of privacy. From then on, this right made its way into international agreements and slowly gained prominence, ultimately becoming a crucial aspect of our lives. With the advance of AI and other technologies that rely on data, privacy has become something precious and fragile. How good a job does the EU do of protecting it? And what about the US?
Currently, when ranking countries by privacy, focusing on Internet users’ rights and the Internet privacy laws each country has in place, Estonia, Iceland and Costa Rica sit at the top, followed by Canada, Georgia and Armenia. Unsurprisingly, China came last in the ranking of 70 countries; but even China has privacy laws in place. The Personal Information Protection Law (‘PIPL’) entered into effect on 1 November 2021 and is China’s first comprehensive data protection law, governing personal information processing activities carried out by entities or individuals within China. Together with this law, the Cybersecurity Law and the Data Security Law were also introduced. The PIPL is partly modeled after the GDPR, containing principles of personal information processing as well as consent and non-consent grounds for processing, but there is no single authority in China with responsibility for supervising compliance with personal data related laws.
Also modeled after the GDPR are the Privacy Amendment (Notifiable Data Breaches) to Australia’s Privacy Act, Brazil’s Lei Geral de Proteção de Dados (LGPD), Egypt’s Law on the Protection of Personal Data, and India’s Personal Data Protection Bill. Despite the close resemblance, there are clear differences: in India, for example, more discretion is given to the Central Government to decide how the law is enforced and when exceptions can be made. In Egypt, the fines for non-compliance are significantly lower than under the GDPR, with a minimum of 100,000 LE (approx. 5,560 EUR) and a maximum of 1 million LE (approx. 55,600 EUR), but data breaches can also result in prison time.
New amendments to New Zealand’s 1993 Privacy Act came into effect on December 1, 2020. Similarly to the GDPR, they require notifying the authorities and affected parties of data breaches and introduce new restrictions on offshore data transfers. However, the fines for non-compliance are significantly lower than under the GDPR (the maximum fine is just 10,000 NZD, although there is a mechanism in place for class action suits), and the “right to be forgotten” is not included in the Privacy Act.
These are some of the data protection laws with significant similarities to the GDPR. Yet seeing that no EU country except Estonia made it into the top of the ranking of countries by Internet users’ privacy, it is worth asking whether the GDPR is actually the best regulation out there.
While researching this topic, I found that 137 out of 194 countries have put in place legislation to secure the protection of data and privacy. In Africa and Asia, 61 and 57 percent of countries, respectively, have adopted such legislation. Naturally, some form of legislation is better than no safeguards at all, but I think the most important aspect of any law is not the written word, but how it is enforced in practice. Personally, I believe that the true effect of the GDPR does not come from its text alone, but from how it has shaped the way other countries relate to data protection, and from how significant the case law has become since data breaches began to be taken seriously. The laws I have briefly mentioned carry ever-expanding requirements, and new legislation is being put in place in several countries (such as Canada’s new data privacy law, the CPPA).
The law on data protection can also vary greatly within a country, as in the case of the US, where there is no comprehensive data protection law at the federal level, only federal legislation that protects data on a more general, sectoral level. Wary that strict rules might restrict the competitiveness of businesses, the US has typically refrained from them. Several US states have created their own laws, with California’s Consumer Privacy Act (CCPA) providing privacy rights and consumer protection, allowing residents of the state to establish precisely how their personal data is collected and what it is used for. The New York Privacy Act obligates companies to acquire consumers’ consent, disclose their de-identification processes, and install controls and safeguards to protect personal information. There are laws in place in Colorado, Connecticut and Virginia, with bills introduced in Utah, Indiana, Iowa, Montana, Oregon, Tennessee and Texas. While there had been an EU-US Privacy Shield framework in place to make GDPR compliance more manageable for organizations operating on both sides of the Atlantic, the agreement was struck down by the European Court of Justice, which held that the rights of EU data subjects were not adequately protected from US surveillance.
Data protection is a national security issue, so it is understandable that different nations might feel apprehensive about data flows. But we must understand that we live in a world so interconnected that simply creating data protection laws will never be enough to prevent misuse and data breaches. Is cooperation even possible on an international level in such a sensitive matter? Experts have previously made a case for a global privacy standard, which would be easier on data protection officers and authorities, stating that “while the European Data Protection Board has provided guidance about adequacy thresholds, each company’s risk assessment necessarily will be subjective and result in inconsistent application of the GDPR’s data privacy scheme.” There is an international data privacy treaty in place, but it is wholly ineffective: this leads me back to my point about the importance of implementation when it comes to any regulation. As long as different nations have diverging interests, which will always be the case, an international data protection treaty seems far away. For business purposes, many countries attempt to comply with the GDPR, which has forced its way into the consciousness of the international community, but it is still often ignored by companies powerful enough to pay a fine and carry on with their lucrative practice of selling personal data.
So what is the solution? Can we find any common ground among privacy laws from around the world, especially with the emergence of newer technologies and with AI legislation gaining prominence worldwide? Or will we just keep trying to comply with differing regulations until one day we find that privacy has vanished altogether, if it hasn’t already?
Only time will tell what this means for the future of data protection, but one thing is for sure: through the EU’s legislative efforts, privacy laws have become more significant in the eyes of world leaders, and they are here to stay. Let’s hope that something similar happens with Artificial Intelligence, so that we may have an imperfect, but slightly safer future.
Mónika Mercz, JD, specializes in English legal translation and is a Junior Researcher at the Public Law Center of Mathias Corvinus Collegium Foundation in Budapest while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Mónika’s past and present research focuses on constitutional identity in EU Member States, with specific focus on essential state functions, data protection aspects of DNA testing, environmental protection, children’s rights and Artificial Intelligence.
Email: mercz.monika@mcc.hu
by admin | Sep 29, 2023 | Privacy & Data Protection
Understanding the complexities of the media landscape is absolutely necessary in this day and age, when information is everywhere and can be obtained quickly at our fingertips. One concept that deserves consideration is media pluralism. At its most fundamental level, media pluralism refers to the wide variety of media sources and voices that are available to, and within reach of, the general public. It ensures that various societal groups, whether based on ethnicity, ideology, or interest, have a platform to express their views, and that no single voice or perspective becomes overwhelmingly dominant. These goals can be accomplished by ensuring that multiple platforms exist.
Pluralism in the media serves as a hedge against the formation of media monopolies, which occur when a single organization or a small group of organizations controls the entirety of the information flow. When such monopolies operate unchecked, they can result in a skewed presentation of events, issues, or narratives, which in turn can influence public opinion in biased ways. By ensuring that a wide variety of voices are heard, media pluralism helps cultivate a more democratic environment, providing citizens with a broader perspective and a more comprehensive understanding of the issues at hand.
In the past, there were far fewer sources of media. The primary channels of information dissemination were printed newspapers, radio broadcasts, and, later, television. These mediums had high entry barriers, often requiring a significant investment of capital and resources to get started. Because of this limitation, the media landscape was frequently dominated by a relatively small number of voices, typically reflecting the more powerful or affluent segments of society.
Nevertheless, the introduction of the internet and other digital technologies brought about a dramatic shift in this landscape: all of a sudden, the barriers to entry fell. Anyone with internet access can potentially become a broadcaster, journalist, or influencer. Emerging media such as blogs, podcasts, and personal video channels each offer distinctive points of view and cater to specific audience niches. The rise of digital technology has bolstered media pluralism, making it possible for even the most underrepresented groups to have their voices heard.
However, the expansion of the digital realm brings new challenges to the concept of pluralism. Platforms driven by algorithms, such as social media sites, frequently prioritize content according to users’ preferences and actions. While this ensures a tailored user experience, it also carries the risk of creating echo chambers: situations in which individuals are primarily exposed to viewpoints and ideas congruent with their existing beliefs. This can inadvertently restrict the breadth of information users receive, which runs counter to the very concept of media pluralism.
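To see how this feedback loop works in deliberately simplified terms, consider the following toy sketch in Python. The article data and the scoring rule are invented for illustration; no real platform’s ranking system is anywhere near this simple. A feed ranked purely on past engagement converges on a single viewpoint within a few sessions.

```python
# Toy illustration of an engagement-driven feed (invented data, not any
# real platform's algorithm): ranking by past clicks narrows exposure.
from collections import Counter

# Hypothetical articles, each tagged with a viewpoint label.
ARTICLES = [
    {"id": 1, "viewpoint": "A"}, {"id": 2, "viewpoint": "B"},
    {"id": 3, "viewpoint": "A"}, {"id": 4, "viewpoint": "B"},
    {"id": 5, "viewpoint": "A"}, {"id": 6, "viewpoint": "B"},
]

def rank_feed(articles, click_history):
    """Score each article by how often the user engaged with its viewpoint."""
    counts = Counter(click_history)
    return sorted(articles, key=lambda a: counts[a["viewpoint"]], reverse=True)

# A user with a slight initial lean toward viewpoint "A".
history = ["A", "A", "B"]
for session in range(3):
    feed = rank_feed(ARTICLES, history)
    top = feed[:3]  # the user only reads the top of the feed
    history += [a["viewpoint"] for a in top]
    print(f"session {session}: top of feed =", [a["viewpoint"] for a in top])
# Every session reinforces the lean: viewpoint "B" never reaches the top
# again, even though it remains technically available further down.
```

Note that opposing content is never removed in this sketch; the ranking alone is enough to make it effectively invisible, which is precisely the mechanism behind the concerns described above.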
In addition, although the digital space makes it possible for a variety of voices to be heard, it also raises questions about the credibility and authenticity of those voices. It is essential to be able to differentiate between fact and fiction in today’s oversaturated media environment. In this setting, pluralism necessitates not only the participation of many different voices but also a dedication to maintaining the truth and adhering to certain journalistic standards. As the distinctions between fact and fiction become increasingly hazy, the question of whether or not the state should play a role in keeping its citizens informed becomes the subject of a heated debate.
The central focus of this discussion is the notion of online pluralism. The internet provides a medium through which individuals, whether professional journalists, bloggers, or casual users, can disseminate their thoughts and perspectives. This democratization of information has produced a vast expanse in which perspectives of every kind, significant and inconsequential alike, coexist. While this is a celebration of the fundamental right to free expression, it also creates a precarious situation in which misinformation can easily proliferate. Given the magnitude of this challenge, the state’s role becomes paramount. One perspective holds that states must intervene and regulate online content to guarantee that their populations are exposed to information that is both accurate and balanced. Such intervention may take diverse forms: regulations targeting tech giants to mitigate the spread of misinformation, state-sponsored digital literacy programs, or state-produced content aimed at a more balanced information landscape. State intervention, however, presents its own array of challenges. The line between regulation and censorship demands scrutiny, and the state’s conception of “truth” will not always be impartial. Efforts to steer the prevailing narrative may end up suppressing authentic dissenting perspectives, thereby undermining the very essence of democratic dialogue.
The provisions of the Digital Services Act relating to the role of very large online platforms in the news media are of particular interest. For example, providers of very large online platforms should “pay particular attention to freedom of expression and information, including the freedom and pluralism of the media and identify and address systematic risks that jeopardise media freedom and pluralism.” These provisions therefore require platforms to preserve and guarantee media pluralism, thereby providing, inter alia, the means to achieve more balanced mass media. This proposed solution rests on the premise that polarization can be reduced through diverse content and that filter bubbles can be neutralized by introducing various types of news into them. However, given the bias of traditional media, the overall picture becomes more complicated, and the above conclusion may not hold. It also ignores the phenomenon of so-called hostile media bias, whereby users reject otherwise objective and neutral information that contradicts their worldview and perceive it as hostile news.[1] A study published by an American research group examined how users react to opposing opinions on Twitter. Republican test subjects were shown a multitude of Democratic content, and vice versa. Examining attitudes before and after, the researchers found that as a result of the messages from the opposite side, users’ attitudes became more polarized, and they became even more entrenched in their original views.[2]
The issue of diversity of information is also a regular topic of debate in traditional media, and the practical problems associated with it are magnified in the online space. How can diversity be measured? What indicators help determine whether an online platform’s service meets the conditions for diverse mass media? What exactly is diversity in the online space? Obviously, no online platform can be expected to balance the content presented to each user with apothecary precision along lines of political bias; but beyond the aforementioned neutrality, what positive desiderata can we expect from platforms? And then there is the next big question: the verifiability of the measures taken. Online service providers and advertisers have integrated personalized technologies into the user experience to such an extent that we have lost the ability to objectively verify the information presented to each user. While everyone sees the same thing in a traditional media stream (including, where appropriate, the authorities that monitor the area), the mix of content on social media is different for each user, making a centralized, one-size-fits-all content monitoring system impossible. We have now entered the age of the “networked information economy”, in which information is usually produced in a decentralized way by users; and because this is free, a very large number of people take advantage of the possibility, making it almost impossible to control what is displayed, and how, for each user.
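To make the measurement question concrete, here is one conceivable indicator, sketched in Python with invented data; it is not a regulatory standard, merely an illustration. The Shannon entropy of the viewpoint mix in a user’s feed is zero when a single viewpoint dominates completely and maximal when exposure is spread evenly.

```python
import math
from collections import Counter

def viewpoint_entropy(feed):
    """Shannon entropy (in bits) of the viewpoint distribution in a feed.

    Returns 0.0 when every item shares one viewpoint, and log2(k) when
    k viewpoints are represented in perfectly equal proportion.
    """
    counts = Counter(item["viewpoint"] for item in feed)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical feeds: one balanced across three viewpoints, one a near-bubble.
balanced = [{"viewpoint": v} for v in ["A", "B", "A", "B", "C", "C"]]
bubble = [{"viewpoint": v} for v in ["A", "A", "A", "A", "A", "B"]]

print(round(viewpoint_entropy(balanced), 2))  # 1.58 bits: evenly spread
print(round(viewpoint_entropy(bubble), 2))    # 0.65 bits: one view dominates
```

Even this toy metric exposes the underlying difficulty: it presupposes that content can be reliably labeled by viewpoint in the first place, which is exactly the kind of contested classification the questions above point to.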
The algorithmic nature of many platforms can, ironically, hinder internal pluralism. For example, a social media site, aiming to tailor content to a user’s preference, might end up showcasing only a narrow band of perspectives, thereby reducing exposure to diverse views within that platform. On the external front, while the internet has lowered barriers to entry, leading to a proliferation of content creators, the dominance of a few tech giants can overshadow smaller, independent entities, making it harder for them to gain visibility.
Moreover, the sheer volume of information in online media, while a testament to external pluralism, can also blur the lines between credible journalism and misinformation. The challenge, then, is not just to promote pluralism but also to ensure that it is anchored in accuracy and reliability.
[1] Erin Carroll: Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms. Georgetown Law Journal 106(1) 69–114 (2017), p. 72.
[2] Christopher Bail et al.: Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences 115(37) 9216–9221 (2018).
János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.
by admin | Sep 27, 2023 | General, Privacy & Data Protection
As we discussed in our previous post, at its inception, Section 230 was seen as a boon for the internet. It protected burgeoning platforms from a potential onslaught of litigation. Without such protections, these platforms might have been wary of allowing user-generated content, fearing lawsuits at every turn. Given the volume of posts, comments, and shares, it would have been an insurmountable task for platforms to vet every piece of content for potential liability. Thus, Section 230 provided the shield necessary for these platforms to grow and for the internet to flourish as a space for open discourse. However, the very protections that spurred the growth of these platforms have now become a double-edged sword. As these platforms have evolved into influential giants, so too have the complexities of the content they host. Misinformation, hate speech, and divisive or incendiary content have become commonplace. The once-celebrated virtual town squares now carry the potential to distort public perceptions, fuel societal divisions, and even sway elections.
Given these challenges, the call for regulation is understandable. However, the U.S. government’s hands are tied, to a large extent, by Section 230. Any attempt to hold platforms accountable for user-generated content runs into the protective wall of this statute. For instance, if a piece of false information propagated on a platform leads to real-world harm, the platform remains shielded from liability by Section 230. This makes it challenging to incentivize platforms to be proactive in managing and moderating content. Every move towards oversight must be measured against the right to freedom of speech: there is a fine line between curbing harmful content and stifling genuine discourse. Additionally, the global nature of these platforms means that regulations in the U.S. might have implications worldwide, or alternatively, global content can impact U.S. users, complicating the jurisdictional scope.
Moreover, Section 230 blurs the lines between a platform and a publisher. Traditional media entities, like newspapers or television networks, are held to strict standards of accuracy and can be liable for spreading false information. In contrast, social media platforms, while influencing public opinion just as potently, if not more, escape these responsibilities. They enjoy the vast reach and influence of publishers without the accompanying accountability. The dichotomy of Section 230 becomes even starker when one considers the algorithmic nature of these platforms. While they might not create content, they undoubtedly influence its reach. Algorithms decide which content is highlighted on user feeds, potentially amplifying some voices while muting others. This curatorial role is akin to editorial decisions in traditional media, yet the platforms remain absolved of the responsibilities that accompany such power.
Because of Section 230’s protection, social media companies have been largely free to develop their own content moderation policies without fear of legal repercussions. If these platforms decide to remove content or leave it up, Section 230 protects their decisions either way. This autonomy has made it difficult for regulatory attempts that aim to hold platforms accountable for user-generated content or misinformation. Furthermore, any government-led effort to mandate specific moderation practices could run into First Amendment challenges. Section 230 allows platforms to navigate the tension between open forums and moderating content without becoming entangled in consistent legal battles.
A recent decision by a federal appeals court has eased some restrictions on the Biden administration’s interactions with social media companies. The court determined that the White House, the FBI, and top health officials cannot coerce or significantly push social media companies to remove content the administration deems misinformation, particularly related to COVID-19. Nevertheless, the ruling narrowed an injunction by a Louisiana judge that had previously prevented the administration from any communication with social media firms. The injunction will remain in place for the White House, the FBI, the CDC, and the surgeon general, but will not affect other federal officials. The court allowed the administration a period of 10 days to seek review from the U.S. Supreme Court. The case originated from two lawsuits, one by a group of doctors and another by a conservative nonprofit organization, both accusing the administration of infringing their free speech rights by pressuring social media platforms to censor their content.
Addressing the challenges posed by Section 230 is not straightforward. Repealing it entirely could stifle free speech, as platforms, fearing litigation, might opt for excessive censorship. On the other hand, letting it stand in its current form allows platforms to sidestep broader societal responsibilities. There’s also a concern about the potential impact on smaller platforms or startups, which might lack the resources for extensive content moderation. Without the protections of Section 230, they could be exposed to debilitating lawsuits. Therefore, regulatory measures that would place more responsibility on platforms for user content have to grapple with the broad immunity granted by Section 230. This isn’t to say that social media platforms can’t be regulated at all, but Section 230 does present a significant hurdle for legislators and policymakers looking to place greater accountability on these companies for the vast amount of content circulating on their platforms.
Section 230, while foundational in shaping the internet we know today, has become a significant roadblock in the path of meaningful regulation of social media platforms. As society grapples with the influence and impact of these platforms, a nuanced reconsideration of Section 230 is imperative. Striking a balance will be complex but essential to ensure that the digital spaces remain open for expression while being safeguarded against their potential detrimental impacts. It’s a testament to the evolving nature of technology and society, where laws once seen as catalysts can become impediments, necessitating reflection and reform.
János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.
by admin | Sep 26, 2023 | Privacy & Data Protection
In the age of digital interconnectedness, the power and influence of social media platforms have become undeniable. These platforms, initially conceived as networks for friends and family to connect, have transformed into global public squares where news is disseminated, opinions are forged, and movements are born. With this massive influence has come a rising call in the United States for regulatory measures on social media. But how does a nation built on the principles of free speech and open discourse strike a balance between oversight and freedom? This is a particularly interesting question in a country whose Constitution’s First Amendment has guaranteed freedom of speech to all citizens since 1791, stating that “Congress shall make no law (…) abridging the freedom of speech or of the press”. Historically, the U.S. approach to media has been one of minimal intervention, but the digital age, with its unique challenges, has nudged the U.S. to rethink its stance.
In the early 1990s, the Internet began to spread, with the emergence of sites offering forums, message boards, and other services based on user-generated content. While this helped promote the use of the Internet, it also led to a number of situations in which courts had to decide whether service providers could be held liable for user-generated content. By analogy with print publications, under the US defamation tort, distributors are not liable for the books they sell unless they have clear knowledge of the infringing content, and they are under no obligation to screen the books they sell for such content. Courts began addressing the liability of internet intermediaries for infringing content posted on their sites in the 1990s, and most decisions used this distributor analogy; but there were also a number of contrary rulings, which found an intermediary liable if it had moderated its site and failed to remove the infringing content (regardless of whether it was aware of it). The result was a strange situation: if a provider carried out no moderation at all, it was not liable for content posted by third parties, whereas if it proactively moderated but failed to remove some infringing content, it could be held liable.
Although the rules on the scope of freedom of expression and the strength of its protection apply to “speech” on the internet as well, the specific nature of internet communication has given rise to a number of new questions, and has reopened questions once thought settled. The first truly significant regulation of digital platforms’ liability for user-generated content is Section 230 of the Communications Decency Act (CDA230), part of the US Telecommunications Act of 1996. This section provided Internet operators with a degree of immunity (from the very beginning of their emergence, in fact) that allowed the digital economy in the US to flourish and ushered in a new era of Internet communication, and thus of freedom of speech. Since the adoption of this section, however, the way digital platforms operate has changed significantly, posing new challenges to the regulatory environment.
Enacted in 1996, Section 230 was designed with the intent of fostering a nascent internet. The statute provides immunity to “interactive computer services” against being treated as the publisher or speaker of any information provided by another content provider. In simpler terms, platforms like Facebook or Twitter cannot be held liable for most of the content their users post. The primary purpose of Section 230 is to allow interactive computer service providers to restrict the display of sexual or violent content without being held liable. The statute’s policy section refers to this purpose, stating that the legislature wants to remove “disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material”. The circumstances surrounding the passage of the law also suggest that Congress did not intend to provide complete immunity, but merely to resolve a contradiction. In eliminating the earlier Catch-22 (under which only those who voluntarily moderated content could be held liable), Congress provided that no action can be brought against a provider for removing content; the goal was to encourage moderation, not to establish complete immunity. In enacting the law, Congress was guided by the principle of promoting the continued and vibrant development of interactive computer services with as little government regulation as possible. According to the “Findings” section of the Act, interactive computer services “represent an extraordinary advance in the availability of educational and informational resources” to citizens, and “the Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity”. The rationale for the legislation was thus also to promote the growth of political discourse and services on the Internet.
The CDA is considered by many scholars to be the legal foundation of today’s Internet: one of the most important pieces of legislation protecting online speech, and a decisive rule that shielded the Internet’s soaring growth. The legislation was born at the dawn of Internet exceptionalism, proclaiming as a flagship of the new approach that “the Internet is different.” The Internet built on these twenty-six words, as Kosseff put it, had the potential to democratize communication itself, giving individuals the freedom to exchange ideas directly with each other in online forums, and to create an unprecedented economic boom: a vibrant and competitive free market in the United States of America. Yet faith in the proper functioning of this special protective regime has begun to falter, because the role of digital platforms has now gone far beyond the basic services they provide, and critics argue that Section 230 is not capable of addressing some of the harms that can arise from a platform-based economy.
Section 230 gives platform operators immunity both for content uploaded by users and for their own moderation activities. Thus, a platform operator cannot be held liable for infringing content uploaded by users, nor can it be held liable for removing any content. As US courts have consistently held in a number of decisions that social networking sites are service providers under Section 230, social media platforms have the right to moderate (“censor”) the content on their services in any way they wish in order to maintain the environment they want. In fact, they can remove any content or leave any content untouched, knowing that they bear almost no liability for infringing content.
The regulation does not discriminate on the basis of platform size, meaning that immunity applies to small blogs and giant platforms with billions of users alike. Of course, the challenges that the emergence of social networking sites would pose to Section 230 could not have been foreseen when the legislation was drafted. As social networking sites are where most internet users get most of their information, these platforms have become an almost inescapable community space, with rules, policies, and ways of operating that have a huge influence on users’ social and private lives. One consequence of the broad immunity granted to social networking sites is a phenomenon known as “collateral speech restrictions”: it is much easier, and cheaper, for a platform to use its Section 230 immunity to remove all risky posts than to fight serious public relations and communications battles to ensure that the First Amendment is enforced unconditionally. Under Section 230, platforms are not legally liable for offensive content posted on their services, or even for terrorist propaganda: a federal court in California ruled in Fields v. Twitter that platform providers cannot be held liable even if terrorists use their sites to spread propaganda.
In short, while Section 230 shields online platforms from liability for most third-party content, it also allows them the discretion to moderate and remove content without facing legal repercussions. This dual-edged nature of Section 230 means that, on one hand, platforms can foster diverse online discourse, but on the other, they can also unilaterally decide what content is permissible. As concerns about misinformation, online extremism, and tech monopolies grow, Section 230 finds itself at the crossroads of discussions about the future of internet regulation and the balance between fostering innovation and ensuring accountability.
János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.
by admin | Sep 7, 2023 | Privacy & Data Protection
We have previously discussed the dangers of at-home DNA testing on this platform, but a crucial aspect of the existence of DNA databases in the hands of foreign companies is the potential threat they pose to a state’s national security. In the constant struggle between huge platforms and states, this phenomenon might just become a key factor in the future.
First, I would like to briefly recall why these databases can create problems in the first place; the reason can be summed up in a few words: the lack of privacy protection. With the emergence of DNA testing companies, a new era came about in which everybody got curious and wished to know more about their ethnic background (heritage), their possible lost relatives and even their health status. Naturally, this produced both unfortunate incidents and positive consequences: people have had their most sensitive information, their very own DNA, sold to pharmaceutical companies, while decades-old crimes have been solved. The sad truth of all outcomes, however, is that every individual who has chosen to participate in this trend and given away their genetic material is now known to the owners of these databases. Another facet of this discourse is that the people whose DNA is in these companies’ hands have provided information not only about themselves, but about their descendants, ancestors and many relatives as well. Anonymity cannot be fully achieved in an environment where data can be reidentified so quickly and interconnected with such ease. It is thus not without reason that the Hungarian National Authority for Data Protection and Freedom of Information has issued a statement about the possible negative consequences of handing over our DNA samples, as it finds the protection provided by these companies to be lacking.
But having seen many sides of this issue, we must take the next step and look at the bigger picture: how do these huge existing databases relate to the national security of a state, which is explicitly considered an ‘essential state function’? What issues arise when individuals from around the world willingly hand over their genetic material to privately owned companies? Considering that the two biggest DNA testing companies, Ancestry and 23andMe, are based in the US, I will look at the situation there as well as in the European Union, where the debate about essential state functions, especially with regard to national security, is ongoing, as presented below.
Before discussing any of the above-mentioned topics, we must define what national security is. Comprehensive national security incorporates political, economic, environmental, and societal approaches alongside the traditional military approach. National security can be understood as the protection of a nation’s values and standards from present and future threats, both human and non-human, extending even to narrow policy choices that can affect the quality of life of its nationals.
In the context of the EU, the Treaty on European Union Article 4(2) names national security as an essential state function of Member States, which means that it remains the sole responsibility of each Member State. There are many aspects of a nation’s security:
- a nation’s possession of control of its sovereignty and destiny,
- (un)used military capacity and the capabilities of the armed forces, and
- homeland security and the fight against terrorism.
Non-military conceptions of national security also include political and economic security, energy and natural resources security, cybersecurity and many more. Cybersecurity is particularly interesting, since with the emergence of new technologies, profiling, digital dictatorship and other bleak futures have become more of a possibility. This facet of national security refers to the protection of the government’s and the people’s computer and data processing infrastructure and operating systems from harmful interference, whether from outside or inside the country.
Sadly, in this respect the processing of sensitive data is not in states’ hands alone. The US takes a quite different approach to data protection and its importance for security: within days of birth, nearly all infants born in America are compelled to give their DNA to the government (for mandatory disease screenings), and these samples have been used by law enforcement before, just as privately owned companies, despite claiming in their data protection statements that they are unwilling to cooperate, have given samples to police before. What the US decides to do with its own database is entirely its right; but seeing what these companies (which are all too willing to brush data protection concerns aside) are doing with customers’ DNA samples, and whom they may be selling them to, is indeed a dangerous game.
National security can be interpreted as the preservation of the norms, rules, institutions and values of society, and has been described as the ability of a state to provide for the protection and defense of its citizenry. On any of these interpretations, however, a breach of data protection on such a scale can only be read as a threat and a concern that governments should take issue with.
This matter is especially urgent knowing that there are now AI technologies which are able to unlock custom-tailored, rare DNA sequences, and government officials and companies are even increasingly turning to technologies like DNA tracking, artificial intelligence and blockchains to try to trace raw materials from the source to the store. Researchers also found that it was possible to extract someone’s sensitive genetic markers and create fake profiles of individuals apparently related to a target by making a relatively small number of DNA comparisons.
These technological advances will soon become so momentous that they will reshape how DNA can be used. Because we are talking about future occurrences, we cannot be sure exactly how misuse might happen, but we do have some ideas. Companies could share genetic information with others not just for research purposes, but also to decide whether someone can be insured, what kind of healthcare or cosmetic procedures should be advertised to them, or what kind of cultural influence should be exerted upon them; people could all too easily be controlled or discriminated against based on their genetic background; and the list goes on. DNA combined with AI could be akin to a weapon never seen before. But what kind of response should countries give when faced with such a threat to national security?
The first thing to note is that this response could have a greater effect on the future of the EU and of integration than we can foresee. On the one hand, as presented above, national security is considered an essential state function, a concept strongly tied to stronger Member States, to cooperation rather than full-on unity, and to respect for sovereignty. On the other hand, the real practical question is whether a single Member State is strong enough to effect any change in the policy of these huge companies; sadly, the answer is most likely no. This leads to a conflict of interests, where a national security question either becomes part of global security or becomes embedded in an EU policy. Currently there are fears of DNA database misuse in many countries, but little action has been taken.
With the US holding a governmental database and most privately owned companies being based in the country, the brunt of the issue falls on other states. I have already mentioned the EU’s dilemma in even approaching the situation, given the delicate state of integration at the moment; but other parts of the world have not fared much better in taking action. Experts now agree that direct-to-consumer DNA testing is a risk, and many people choose not to use these services anymore, but the privacy concerns remain ever-present.
Very few countries have adopted specific legislation on genetic testing, those having done so being Austria, Switzerland, Germany and Portugal, though a growing number of “genetic privacy” laws have been passed in the US in recent years as well. In the UK, the Human Genetics Commission, the body responsible for providing the British government with expert advice, has given its opinion on the matter: the UK now considers that it would not be desirable to ban tests bought over the Internet, simply because such a ban would be impossible to police given the freedom of access to the Internet. I would personally agree that a ban could never be effective; only strict genetic privacy laws, heavy fines and a comprehensive framework of action by various countries could bring about any change.
Because I think these unpoliced, privately owned databases potentially pose some of the biggest threats to the future national security of Member States, I would advise setting up new bodies in the Member States, which would each help enact their country’s legislation and decisions. These bodies could operate under a larger EU Board, which could effect change more efficiently, hopefully minimizing the misuse of sensitive personal data and enabling states to provide some form of protection for their citizens in this respect as well.
Mónika Mercz, JD, specializes in English legal translation and is a Junior Researcher at the Public Law Center of Mathias Corvinus Collegium in Budapest while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Mónika’s past and present research focuses on constitutional identity in EU Member States, with specific focus on essential state functions, data protection aspects of DNA testing, environmental protection, children’s rights and Artificial Intelligence.
Email: mercz.monika@mcc.hu
by admin | Sep 5, 2023 | Privacy & Data Protection
The European Union (EU) is not just an economic bloc but also a beacon of shared principles such as democracy, human rights, and the rule of law. These values underpin the EU’s approach to media regulation, including outlets from third countries. The EU has established a legal and regulatory framework that seeks to foster media pluralism, protect freedom of expression, and guard against disinformation.
EU media regulation draws on a variety of legal instruments. The Audiovisual Media Services Directive (AVMSD) is a key piece of legislation that regulates broadcast and on-demand media services in Europe. The AVMSD aims to create a single European market for audiovisual media while ensuring cultural diversity, the protection of minors, and the promotion of European works. Importantly, the directive applies to media service providers who are established within EU Member States, but it also impacts third countries’ outlets seeking to operate in the region. Third countries’ media outlets, under AVMSD, must adhere to the ‘country of origin’ principle, meaning they have to follow the rules of the EU country they broadcast from. This ensures uniformity and fairness but has also been the subject of criticism. Some argue that it enables ‘forum shopping’, where outlets base their operations in countries with lighter regulations.
Furthermore, the EU has been tightening its regulatory grip in response to concerns over disinformation and foreign interference. The Code of Practice on Disinformation is a self-regulatory framework agreed upon by tech and media companies. Its primary aim is to enhance transparency of political advertising and reduce the spread of online disinformation. Even though it is primarily targeted at digital platforms, its scope extends indirectly to third countries’ media outlets.
Recently, there have been calls to extend EU jurisdiction to third countries’ media outlets that target EU citizens. Critics argue that current regulations don’t adequately protect against foreign disinformation campaigns.
The European Democracy Action Plan, announced at the end of 2020, is a framework developed by the European Commission to strengthen democracy in the European Union. The plan aims, among other things, to ensure the integrity of elections and political advertising, increase transparency, support independent journalism and improve the EU’s ability to detect and respond to disinformation. As part of the Action Plan, Ursula von der Leyen announced the European Media Freedom Act (EMFA) in 2021, which builds on the Audiovisual Media Services Directive and sets out rules on the independence of media regulators, promotes transparency in media ownership and strengthens the independence of editorial decisions. The initiative focuses on removing obstacles to the creation and operation of media services and aims to establish a common framework to promote the internal market in the media sector, with a view to safeguarding media freedom and pluralism in that market. The draft is scheduled for adoption by the end of 2023, and at the time of writing the trilogue negotiations are ongoing.
The current Article 16 of the proposal deals with the coordination of measures for media services outside the EU and provides that: “The Board shall, upon request of the national regulatory authorities or bodies from at least two Member States, coordinate relevant measures by the national regulatory authorities or bodies concerned, related to the dissemination of or access to media services originating from outside the Union or provided by media service providers established outside the Union that, irrespective of their means of distribution or access, target or reach audiences in the Union where, inter alia in view of the control that may be exercised by third countries over them, such media services prejudice or present a serious and grave risk of prejudice to public security.”
The recitals to the Article highlight the specific task of media authorities to protect the internal market from activities of media services from outside the Union that target or reach audiences within the Union. “Such risks could take, for instance, the form of systematic, international campaigns of media manipulation and distortion of facts in view of destabilizing the Union as a whole or particular Member States. In this regard, the coordination between national regulatory authorities or bodies to face together possible public security […] threats stemming from such media services needs to be strengthened”. (EMFA Recital 30) To this end, the legislation aims, according to the preamble, to coordinate the national measures that can be adopted to counter threats to public security posed by media services originating or established outside the EU but aimed at an EU audience. To this end, the legislation proposes the establishment of a list of criteria, to be drawn up by the European Board for Media Services (to be set up by the EMFA). “Such a list would help national regulatory authorities or bodies in situations when a relevant media service provider seeks jurisdiction in a Member State, or when a media service provider already under the jurisdiction of a Member State, appears to pose serious and grave risks to public security. Elements to be covered in such a list could concern, inter alia, ownership, management, financing structures, editorial independence from third countries or adherence to a co-regulatory or self-regulatory mechanism governing editorial standards in one or more Member States.” (EMFA Recital 30b)
This part of the regulation was clearly brought into being by the Russian-Ukrainian conflict. On 1 March 2022, the Council of the European Union adopted a Council Regulation imposing restrictions on the operation and broadcasting in the European Union of certain Russian media outlets linked to the state, in response to Russia’s hybrid warfare. It stipulated that operators are prohibited from broadcasting or allowing the broadcasting of content from Russia Today and any associated service provider, including “transmission or distribution via cable, satellite, IPTV, ISPs, Internet video-sharing platforms or applications”. The suspension of the channel has sparked a major debate among journalistic organizations, as well as lawyers and experts.
The Digital Services Act (DSA) also requires the development of crisis response mechanisms. It defines a crisis as a situation where “exceptional circumstances arise which could lead to a serious threat to public security or public health in the Union or a substantial part of it.” (DSA Art. 36.) In such situations, the Commission may adopt decisions requiring online platforms to take measures such as assessing the threats posed by the operation of their services and taking proportionate, concrete, and effective measures to prevent, eliminate or limit them. The DSA, while not specifically targeting media outlets, could have implications for third countries’ outlets that provide digital services within the EU. The DSA places significant emphasis on transparency obligations for digital services, especially those classed as ‘Very Large Online Platforms’ (VLOPs) with more than 45 million users. (DSA Art. 33.) This includes requirements for clear reporting of their content moderation policies, measures against illegal content, and robust advertising transparency. This is particularly crucial in managing propaganda, which can often be disguised as legitimate advertising or user content. These new regulations will make it harder for foreign entities to manipulate the digital information space and will increase the traceability of such activities. Additionally, the DSA stipulates that VLOPs must conduct annual audits to assess systemic risks associated with their platform, including the dissemination of illegal content, negative effects on fundamental rights, and intentional manipulation of the platform. They must also assess and mitigate the risks arising from the design and use of their services, including, according to the Regulation, “any actual or foreseeable negative impact on civil discourse and on the electoral process and public security”. (DSA Art. 34.) While the DSA does not explicitly target propaganda from outside the EU, it creates a comprehensive framework to increase the transparency and accountability of digital platforms, making it much more difficult for any entity, foreign or domestic, to use these platforms for propagandistic purposes. Moreover, the DSA requires platforms to provide researchers with access to key data to understand and mitigate the risks associated with the dissemination of disinformation, which will support efforts to combat foreign propaganda. (DSA Art. 40.)
In conclusion, while the EU’s regulation of third countries’ media outlets is grounded in its legal and regulatory framework, it’s also subject to ongoing debates. The balance between fostering media pluralism, protecting citizens, and mitigating disinformation is a complex and evolving challenge. The EU’s future steps in this area will be watched closely by regulators and media outlets alike.
János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled „Regulation of Social Media Platforms in Protection of Democratic Discourses”.