
Mónika MERCZ: Is it “I” or “AI”? – The legal questions of personality profiling by Artificial Intelligence

When we say the word “I”, it entails certain aspects of the self: our name, age, outlook on the world, and several other factors that shape who we are as a person. Often, we do not know the full extent of what this one syllable comprises. Beyond not fully knowing ourselves, we must also reckon with our blind spot: a part of ourselves that we cannot see, but that others can see easily. Being blind to how others perceive us carries various dangers, and with the introduction of artificial intelligence (AI), the gap between what we see as “I” and what companies, governments or any private person capable of using an AI can see has widened significantly. This has devastating implications for privacy, as we could quite literally lose control over who knows what about us and to what extent. This is why I believe it is crucial, in this day and age, to talk about the dangers AI poses through its capacity to create personality profiles. Are there legal safeguards in place to help keep the individual opaque and the state transparent? Where is the line that separates us as individuals from us as mere pools of data, easily used and known?

The mortifying ordeal of being known is, in the present case, mortifying in the literal sense of the word, as AI can build a personality profile from almost anything we do on the internet: tracking cookies collect information about us, our fingerprints can help certain software recreate our facial features, facial recognition is becoming widely used thanks to its role in unlocking our phones, and even at-home DNA testing carries dangers for our privacy. AI can also copy a person’s voice from just a few seconds of audio. Add to this the existence of deepfakes, and we have a technology that could look, sound and think like any person whose personality the user wishes to emulate. To ensure that grave misuse of collected data does not come to pass, we must take action in both regulation and enforcement.

Firstly, I would like to stress that the Hungarian Constitutional Court thought it imperative to prevent a universal personal identification number from coming into existence as early as 1991. The right to the protection of personal data appeared in Decision 15/1991 (IV.13.), where the Constitutional Court interpreted it not as a traditional protective right but as an informational right to self-determination, with regard to its active aspect. Even in 1991, this meant that everyone has the right to decide about the disclosure and use of his or her personal data; that approval by the person concerned is generally required to register and use personal data; and that the entire route of data processing and handling shall be made accessible to everyone, i.e. everyone has the right to know who uses his or her data, and when, where and for what purpose. The principle of divided information systems and the prohibition of a single identifier were introduced to protect citizens against the creation of a single “identity profile”. All of this comes full circle when we look at the GDPR, whose principles are woven throughout our interpretation of what data protection is and why it is necessary. The decision is considered a landmark case in Hungary not only as a forerunner of the GDPR’s principles, but because it still shapes decisions made by the Hungarian Data Protection Authority. I would venture to say that its importance will come up again in the context of the new draft AI legislation, which would prohibit quite a few of the technologies that could be used for profiling.

Banning personality profiling is a noble goal indeed. However, enforcement might pose a problem: knowing the ins and outs of people would be extremely lucrative for private companies and useful for governments, and the mere possibility of paying a fine might not offer us much protection.

So, what are our options? As the possible consequences of personality profiling by AI are horrendous, stricter and stronger regulations are imperative. Profiles built of individuals could cause serious societal problems, with people drifting apart from one another and losing their free will and identity. The Chinese system has shown us some of the outcomes of a digital dictatorship, and a more nefarious, subconscious approach – “nudging”, a behavioral change induced by outside influences – could be used to sway humans across the globe. The constant stream of content on platforms, the introduction of virtual reality and other technological advances all point towards a future where combining databases of information about particular individuals is not only possible, but not even very difficult. There have already been instances of AI’s influence culminating in unspeakable tragedy: a man committed suicide because of his close relationship with and emotional reliance on a chatbot, some Japanese workplaces already use emotion recognition tools, and the Metaverse is full of opportunities to commit crimes.

To stop the negative effects of AI from spilling over into our everyday lives and, most importantly, to make profiling as scarce as possible, certain steps must be taken. The protection of the individual must be given priority through legislation at the level of the European Union and the Member States alike, and these pieces of law should be enforced to the highest possible degree. The fact that the draft AI legislation, among its provisions on prohibited Artificial Intelligence practices, merely requires a “guarantee that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behaviour” is – in my opinion – not enough to respond to this level of risk. Prohibition is only effective if there is significant power behind its enforcement. This is why the European framework for AI should also have a body working strictly not just on AI governance, but on profiling in particular. Dangerous and prohibited AI technologies deserve the highest level of attention the EU can give.

Social programmes are also desperately needed to strengthen communities and families, so that negative societal effects might be mitigated. Additionally, education on the dangers and proper use of AI, along with raising user awareness, is a key component of a successful transition into the next phase of our lives, in which artificial intelligence lives with us. Strengthening data protection across the globe and putting legislation in place as a safeguard should be the goal of all countries. However, the perception of data protection itself varies from culture to culture and from legal system to legal system. Significant players such as India are not planning to adopt AI regulation, which also poses a problem for possible cooperation.

Because AI is a worldwide phenomenon and all countries could have a role to play in its development, a broader approach is needed than the European Union’s attempt at rectifying a frightening situation. Of course, it is a start, and we must celebrate all victories. However, the path towards reliable and safe AI is quite long. Personality profiles in particular deserve our attention, so that the “I”, the self of an individual, is at least protected from outside forces that would aim to influence it, use it, commercialise it or otherwise reveal it to the world.

What does the future hold for AI profiling? We will probably only get an answer once the EU’s draft regulation has been in place for a few years, or once we acknowledge that we have let the genie out of the bottle by unleashing AI (which operates as a black box, so we know little about its inner mechanisms). We must dare to look at the intricacies of our new world with an interdisciplinary approach, ask questions and hold companies accountable – or accept that hope is all we have left, which could usher in a less than favorable loss of the right to self-determination during the next decades.


Mónika Mercz, JD, specialized in English legal translation, is Professional Coordinator at the Public Law Center of the Mathias Corvinus Collegium Foundation while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Mónika’s past and present research focuses on constitutional identity in EU member states, data protection aspects of DNA testing, environmental protection, children’s rights and Artificial Intelligence. Email: mercz.monika@mcc.hu
