
Facial Recognition Technologies, AI Act and GDPR: A Short Risk-Based Evaluation

The use of biometric data processing systems, in particular facial recognition technologies (FRTs), has become widespread among law enforcement agencies (LEAs) in recent years. The purposes for which FRTs are operated vary widely, but most often fall into three categories: authentication, identification, and profiling. Whatever the purpose, the system processes personal data, most likely biometric data, typically facial features extracted from real-time images or recordings of public or private spaces. Some EU Member States, such as France and Germany, deploy FRTs using different methods, mostly post-remote ones, although there is also a tendency towards operating live FRTs. Since algorithms begin and move through their lifecycle with (personal) data, the GDPR and the AI Act both regulate data processing activities within FRTs, and the two regulations complement each other in identifying and mitigating the risks associated with such processing. The operation of FRTs in public spaces is restricted under the AI Act, as it is treated as a particularly high-risk application, yet risks remain that need to be identified and mitigated before an FRT is deployed. In this short essay, I highlight those risks under the GDPR and the AI Act with respect to the operation of FRTs in public spaces.

Potential risks in operating FRTs under the GDPR and the AI Act

  • Data Minimization and Anonymity

The principle of data minimization in the GDPR presents practical and legal challenges for FRTs operating in public spaces, compromising individuals’ right to remain anonymous. Constant data collection, both live and post-event, leaves no room for escape from surveillance cameras and restricts individuals’ ability to exercise their rights, such as the right to restriction of processing and the right to be forgotten. The AI Act’s inconsistent terminology for biometric processing systems exacerbates these issues by overlooking the technical nuances of FRTs. The AI Act refers to at least three different terms: “remote biometric identification system” (Article 3 (36)), “real-time remote biometric identification” (Article 3 (37)), and “post-remote identification systems” (Article 2 (6a)). While the legislation defines each of these terms, their plurality, without further context, may complicate a clear understanding for both LEAs and developers.

  • Purpose Limitation and Function Creep

Ensuring that AI algorithms operate for a single defined purpose is a significant challenge under both the GDPR and the AI Act. Function creep, where algorithms are repurposed beyond their original intent, raises concerns about potential misuse by private companies (and LEAs) for developing new products or surveillance tools. Imagine that Clearview AI was first created as an entertainment application and, after collecting enough training data from individuals, became one of the commonly used predictive policing tools, containing around 30 billion images consisting mostly of biometric data. When the objective of operating an FRT tool is compromised, the question of responsibility becomes even more complex to address. The AI Act complicates developers’ responsibilities by creating ambiguity about the operational contexts of FRTs within LEAs, since it blurs the lines between real-time and post-remote identification (see broader notes on this topic here).

  • Data and System Accuracy

The accuracy of data processing under the GDPR and system accuracy under the AI Act present both technical and legal challenges for FRTs. Individuals who are unaware that their data is being processed by FRTs face difficulties in correcting and updating their information, potentially leading to inaccuracies, and thus algorithmic discrimination, affecting themselves and their associated groups. One way to increase the reliability of the system would be to continuously update police databases with data from the operational field of FRTs and thereby improve the accuracy of the algorithm. However, continuous data processing to improve accuracy poses privacy risks, particularly when new training data is collected from operational fields without any ability to distinguish whom to include in the system and whom not (and, possibly, without their consent). The AI Act’s requirement that developers determine and apply accuracy levels, without mandated collaboration with LEAs, adds further complexity.

  • Administrative Challenges and Accountability

The administrative challenges faced by LEAs in using FRTs are significant, primarily due to the outsourcing of software and hardware to the private IT sector. This creates a complex network of responsibility and reliance on external expertise, which can lead to a lack of technical understanding within LEAs and impede their ability to assess human rights risks. The intricate flow of data between multiple entities complicates accountability, posing a risk to public trust if not managed transparently. The AI Act adds ambiguity by defining law enforcement activities carried out “on behalf” of LEAs without specifying the involved entities.

Possible ways to mitigate the risks

The GDPR and the AI Act offer complementary frameworks for addressing the risks associated with biometric data and FRTs. The GDPR emphasizes individual rights, data minimization, and accountability, requiring Data Protection Impact Assessments (DPIAs) to evaluate risks before personal data is processed. However, the GDPR does not address potential state surveillance, bias, or accuracy testing, which are covered by the AI Act.

The AI Act introduces minimum standards to ensure that fundamental rights are respected throughout the lifecycle of AI systems, including conceptual safeguards and obligations for those involved in the development, marketing, and operation of these systems. High-risk AI systems must undergo a Fundamental Rights Impact Assessment (FRIA) before being placed on the market, similar to the DPIA required by the GDPR. However, to avoid additional burdens, the AI Act requires a FRIA only for aspects not covered by the DPIA, establishing a complementary connection between the two assessments.

Addressing the risks associated with biometric data and FRTs requires a comprehensive approach that integrates the strengths of both the GDPR and the AI Act. By focusing on data minimization, purpose limitation, accuracy, and accountability, these regulatory frameworks can mitigate the potential negative impacts of FRTs and promote their ethical use. Continuous refinement of these frameworks, guided by ongoing research and stakeholder engagement, is essential to ensure they remain effective in the rapidly evolving landscape of AI and biometric technologies.


Dr. Gizem Gültekin-Várkonyi is an assistant professor at the University of Szeged, Faculty of Law and Political Sciences, International and Regional Studies Institute. She completed her dissertation on the applicability of the GDPR on social robots and is currently working on the intersection of data protection and AI technologies in a broader sense. Her further research interests include diverse scientific methodologies such as futures research, as well as EU digital policy and data policy.