
Individual Autonomy in the World of AI: How Long Can Fundamental Rights Offer Protection in the Face of Evolving AI?

This article analyzes the key issues surrounding privacy, and the individual autonomy constitutionally anchored to it, in the digital age. The topic also bears on the free speech discourse at the intersection of law and AI, which affects both individual privacy and freedom of expression; some of those aspects are addressed below.

Recently, the Council of the EU approved a precedent-setting law aiming to harmonize rules on AI, the so-called AI Act. The Act follows a risk-based approach, the first of its kind: AI systems are placed into four categories, known as unacceptable risk, high risk, limited risk, and minimal risk. AI systems presenting only limited risk are subject to very light transparency obligations, while high-risk AI systems must meet a set of requirements and obligations, including a conformity assessment, before they can gain access to the EU market. By exception, systems that merely perform narrow procedural tasks, improve the result of a previously completed human activity, detect decision-making patterns, or perform preparatory tasks to an assessment are not classified as high risk. AI systems used for cognitive behavioral manipulation and social scoring will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorize people according to characteristics such as race, religion, or sexual orientation. The logic of the risk-based approach is simple: the higher the risk of harm to society, the stricter the rules. For law-making bodies, this is a significant step towards harmonization, and the Act may set the standard for global AI regulation to follow. Given the dynamic and ever-evolving nature of AI models, harmonization and precedent-setting cases are the future of this newly formed branch of law in the Member States.

The AI Act aims to ensure safety and respect for the fundamental rights of all EU citizens while stimulating innovation in Europe; however, scientists and innovators fear that this static approach to lawmaking cannot keep up with the dynamic nature of AI. To take an everyday example: a few years ago, AI still generated pictures that were easily detectable as fake, yet today, with the emergence of the deepfake phenomenon, politicians and civilians alike are continuously under attack. With this in mind, the question arises: when will AI turn on the human rights that are embedded in our individual autonomy and constitutionally tied to privacy and related rights?

The Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR) echoes Article 19 of the Universal Declaration of Human Rights: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

To put this into perspective: whenever someone expresses an opinion via a social media platform (liking a post, leaving a comment, searching with keywords), they may be subject to surveillance by the algorithms and AI these sites employ, which can store information about them for an extended and uncertain period of time, without the user’s knowledge.

Yet when questions are raised about how AI will affect these rights in the future, lawmakers are silent.

According to Article I of the Hungarian Fundamental Law, the State has a primary duty to respect and protect the inviolable and inalienable fundamental rights of individuals. Hungary also acknowledges both individual and collective rights, and these rights, as well as corresponding obligations, are detailed by law. Any restrictions on fundamental rights must be proportionate and necessary, aiming to safeguard other fundamental rights or constitutional values, while ensuring that the essential content of the right is upheld. Moreover, certain rights and obligations also extend to legal entities, as defined by law.

This means that the rights held by both groups of subjects mentioned above, natural persons and legal entities, include freedom of expression and the freedom to seek and impart information through media. AI, however, will also have a great effect on the right to privacy. Brad Smith, President of Microsoft, remarked as early as 2018 that “[…] technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses. Humans will not have agency and control in any way if they are not given the tools to make it happen.”

One way in which AI is already impacting privacy is via Intelligent Personal Assistants (IPAs) such as Amazon’s Echo, Google’s Home, and Apple’s Siri. These voice-activated devices continuously learn about the interests and behavior of their users, and whether they observe the ethical rules created for artificial intelligence when handling such data remains an open question. So how private is private life? Modern technology is now at the stage where long-term records can be kept on anyone who produces storable data: anyone with bills, contracts, digital devices, or a credit history, not to mention public writing and social media use. Digital records can be searched with pattern-recognition algorithms, which means we have lost the default assumption of anonymity through obscurity. Anyone can be identified by facial recognition software or by the mining of their shopping or social media habits. These online habits may reveal not just the basic facets of a person’s identity but also their political or economic predispositions and motivations, and offer insight into which strategies might be effective for changing them.

Machine learning allows us to extract information from data and discover new patterns, and it can turn seemingly innocuous data into sensitive, personal data. For example, patterns of social media use can predict personality categories, political preferences, and even life outcomes. Word choices, or even a stylistic fingerprint, can indicate emotional states, including whether someone is lying. This has significant repercussions for privacy and anonymity, both online and offline. AI applications based on machine learning need access to large amounts of data, yet data subjects have limited rights over how their data are used. With this in mind, how can the right to privacy withstand these forever-learning entities?
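For readers who want to see the mechanics, the following is a minimal sketch of the kind of inference described above. Everything in it is synthetic and hypothetical: the “like” counts, the category weights, and the sensitive attribute are invented for illustration and are not drawn from any real platform or dataset.

```python
# Minimal sketch: inferring a sensitive attribute from innocuous features.
# All data are synthetic and the feature semantics are hypothetical; this
# illustrates only the inference risk discussed above, not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Innocuous-looking features: counts of "likes" in three content categories.
likes = rng.poisson(lam=[5, 5, 5], size=(n, 3)).astype(float)

# Synthetic ground truth: a sensitive attribute correlated with those counts
# (the weights are invented purely for this demonstration).
logits = 0.6 * likes[:, 0] - 0.5 * likes[:, 1] + 0.1 * likes[:, 2] - 0.5
sensitive = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, sensitive, test_size=0.25, random_state=0
)

# A plain, off-the-shelf classifier recovers the sensitive attribute
# well above chance from the innocuous inputs alone.
model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```

The point is the pattern, not the code: none of the inputs is sensitive on its own, yet the trained model makes the sensitive attribute recoverable. This is precisely the gap that a regime focused on “personal data” at the point of collection struggles to close.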

Preceding the AI Act, the EU had already adopted the General Data Protection Regulation (GDPR) in 2016 to protect privacy rights and interests. The Regulation, however, applies only to personal data, not to the aggregated “anonymous” data usually used to train AI models. In addition, personal data can in certain cases be reconstructed from a trained model, with potentially significant consequences for the regulation of these systems. For instance, while people have rights regarding how their personal data are used and stored (via GTCs, or General Terms and Conditions), they have limited rights over trained models. Instead, models have typically been thought to be governed primarily by various intellectual property regimes, such as trade secret protection. As it stands, no data protection rights or obligations apply to a model in the period after it has been built but before any decision has been taken about its use. This phenomenon challenges many aspects of the right to privacy.

AI also has important repercussions for democracy, and for people’s right to a private life and dignity in that context. If AI can be used to determine people’s political beliefs, individuals become susceptible to manipulation, a practice the AI Act expressly deems unacceptable. Political strategists could use such information to identify which voters are likely to be persuaded to change party affiliation, or to become more or less likely to actually cast a vote, and could apply resources to persuade them accordingly. Such strategies are alleged to have significantly affected the outcomes of past elections in the UK and the USA.

The most famous such attempt was the Cambridge Analytica scandal of the 2010s, in which the personal data of millions of social media users were harvested and analyzed without their consent by the British consulting firm Cambridge Analytica, predominantly for political advertising in the UK ahead of a vote. In the USA, moreover, the data exploitation continued through the election period: Donald Trump’s 2016 presidential campaign used data collected without consent to build psychological profiles, determining users’ personality traits from their Facebook activity. In violation of the platform’s General Terms and Conditions (GTCs), the users’ data were then used as a catalyst for micro-targeted, customized messages. The breaches of consumer trust and the resulting breaches of contract only add to the legal complexity of such practices.

Alternatively, if AI can judge people’s emotional states and gauge when they are lying, people could face persecution by those who do not approve of their beliefs, ranging from bullying by individuals to missed career opportunities; in some societies, it could lead to imprisonment or even death at the hands of the state. Networks of interconnected cameras already keep many metropolitan cities under constant surveillance, and vision-based navigation drones, robots, and body cameras are extending this surveillance to rural locations and into one’s own home, places of worship, and even locations where privacy is considered sacrosanct, such as bathrooms and changing rooms.

Most of the arguments above focus on the importance of privacy, and these concerns also bear directly on freedom of expression. When people feel constantly monitored and fear that their private thoughts and emotions might be exposed or misinterpreted, their willingness to express themselves freely diminishes. This chilling effect on free speech mirrors the broader privacy argument: the erosion of privacy stifles individual autonomy and freedom, underscoring how intertwined these fundamental rights are.

Freedom of speech and expression is a fundamental right in democratic societies, and AI could profoundly affect it. Technology companies have widely touted AI as a solution to problems such as hate speech, violent extremism, and digital misinformation, but automated content removal risks censoring legitimate speech, a risk made more pronounced by the fact that the removal is performed by private companies, sometimes acting on government instruction. Heavy surveillance likewise affects freedom of expression, as it encourages self-censorship.

The unknown has always been a source of concern for the individual, and this new wave of technology undoubtedly makes the average person uneasy. The question the Member States will soon have to answer is this: when will the pendulum swing from natural to artificial intelligence?


Réka Kérész is a fourth-year law student at the Széchenyi István University, Ferenc Deák Law School in Győr, Hungary. Her main interests include Constitutional Law, International Law, and the combination of the two.
