AI Act vs. reality: technical obstacles to compliance and the possible workarounds – Part II.

Fundamental-rights compliance: everyone wants something different

A persistent tension emerges between industry actors and civil rights groups. Industry argues that it is unclear why it is held to detailed human-rights compliance, especially on anti-discrimination, while other actors posing comparable risks face no such duties or only much lighter ones. Civil society groups, by contrast, fear that any easing or delay would weaken protections, and they explicitly warn against postponing implementation, even a postponement in timing alone. According to the cited sources, these positions rarely converge.

From a technology perspective the issue is compounded by the fact that assessing data-governance and testing requirements for high-risk systems (for example representativeness, suitability, reduction of error rates) typically requires "group-level" performance measurements, including across groups that count as special categories of personal data under the GDPR (for example racial or ethnic origin, religion, health, etc.). The GDPR and related EU data-protection rules allow processing of such data only under strict conditions, which means developers may lack access to test datasets with the labels needed to credibly demonstrate non-discrimination.
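To make the testing problem concrete, here is a minimal sketch (not from the cited sources) of what a group-level error-rate measurement looks like. The function and the toy data are illustrative assumptions; the point is that the `groups` column, which makes the comparison possible at all, is exactly the special-category data the GDPR restricts.

```python
from collections import defaultdict

def group_error_rates(y_true, y_pred, groups):
    """Per-group error rate for a classifier's predictions.

    A disparity between groups is the kind of signal the AI Act's
    data-governance and testing duties are meant to surface.
    """
    errors = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Hypothetical pseudonymized reference data, as the White Paper's second
# avenue suggests: labels come from a regulator-issued test suite, not
# from production personal data.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(group_error_rates(y_true, y_pred, groups))
# → {'A': 0.0, 'B': 0.3333333333333333}
```

Without the `groups` labels the same predictions yield only an aggregate error rate, which can hide the disparity entirely; this is the access problem the paragraph above describes.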

The White Paper from the Bertelsmann Foundation and the German AI Association identifies two avenues toward a solution:

  • issuing detailed, harmonized guidance and codes of practice to evidence compliance, for example publishable test protocols and documentation templates, and
  • using predefined, pseudonymized test suites and reference datasets released by a regulator or a standards body.

Patchwork EU digital rulemaking

Many believe that the most serious problem lies in the layering of rules. A single AI incident can trigger several reporting regimes at the same time: data protection (a data breach under the GDPR), cybersecurity (reports under NIS2), and a “serious incident” under the AI Act. These require different recipients, content elements and deadlines.

What does this mean in practice?

  • Recipient: GDPR reports usually go to the national data protection authority; NIS2 incident reports typically go to the national CSIRT or the competent cybersecurity authority; “serious incidents” under the AI Act fall to the market surveillance authority.
  • Subject and focus: under the GDPR the focus is on personal data protection and risks to data subjects; NIS2 concentrates on service security and operational continuity; the AI Act assesses product/system safety and broader societal risk.
  • Content elements: the reports ask for different details (for example categories of personal data affected, technical cause and mitigation, system functions and description of abnormal behavior), and they do not use the same formats.
  • Deadlines: each regime uses its own timetable, so the same event can generate multiple reports that fall due at different times.
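The fan-out described above can be sketched as a small compliance matrix. This is an illustration only, not legal advice: the recipients and focus areas follow the bullets above, while the deadlines are simplified assumptions (the GDPR's 72-hour breach notification and NIS2's 24-hour early warning are the best-known figures; real timetables are staged and more nuanced).

```python
from dataclasses import dataclass

@dataclass
class ReportingDuty:
    regime: str
    recipient: str
    focus: str
    deadline_hours: int  # simplified first deadline; real regimes are staged

# One row per regime, mirroring the recipient/focus bullets above.
DUTIES = [
    ReportingDuty("GDPR", "national data protection authority",
                  "risks to data subjects", 72),
    ReportingDuty("NIS2", "national CSIRT / competent cybersecurity authority",
                  "service security and operational continuity", 24),
    ReportingDuty("AI Act", "market surveillance authority",
                  "product/system safety and societal risk", 360),
]

def due_reports(triggered_regimes):
    """Duties triggered by one incident, ordered by the nearest deadline."""
    hit = [d for d in DUTIES if d.regime in triggered_regimes]
    return sorted(hit, key=lambda d: d.deadline_hours)

# A single AI incident that is also a data breach and a security incident:
for duty in due_reports({"GDPR", "NIS2", "AI Act"}):
    print(f"{duty.regime}: notify the {duty.recipient} "
          f"within {duty.deadline_hours}h ({duty.focus})")
```

Even in this toy form, one event produces three reports to three recipients on three clocks, which is the layering problem the paragraph describes.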

Moreover, the interpretation of reporting requirements is also influenced by different definitions. A good example of conceptual divergence is the definition of “biometric data” and “biometric identification” in the GDPR and in the AI Act. The definitions are not the same, yet they directly affect risk assessments and documentation for systems that use biometric identification or categorization. Because of this definitional gap, the same solution can fall into different categories under the two laws, which creates divergence in corporate compliance matrices.

The Commission’s upcoming digital “omnibus” package is intended to address this divergence. Its goals are to clarify key concepts (for example biometric data), reduce parallel reporting and align deadlines, as well as to issue consistent, reusable guidance and documentation templates. The European Parliament’s research links all this to significant implementation complexity and administrative burden, since AI-related obligations run in several EU rule sets that are not fully aligned, with differing concepts, reporting tracks and deadlines.

Persistent disadvantages for small businesses

The White Paper makes it clear that, despite all easing measures and sandboxes, small and medium-sized enterprises end up at a disadvantage. It is worth pausing on what a regulatory sandbox is, since much of the proposed relief for SMEs depends on how these sandboxes work in practice. A regulatory sandbox is a supervised, time-limited and scope-limited test environment where developers can trial solutions under real or realistic conditions with targeted, temporary regulatory flexibilities in place. Its purpose is to allow, under controlled conditions:

  • assessment of product or system compliance and risk management,
  • verification of how procedures and documentation work in practice, and
  • clarification of supervisory expectations.

Benefits include early feedback and guidance from authorities, a single point of contact, proportionate and case-by-case calibration of obligations, faster iteration and a shorter time to market, and the ability to reuse test results and logs later in conformity assessment. A sandbox does not waive core safety and fundamental-rights requirements, and it is limited in time, scope and participants. It typically operates with detailed entry criteria, reporting duties and safeguards such as data-subject protections and supervisory controls.

However, most SMEs do not have an in-house legal and compliance team, so if they want to develop AI, they must hire someone to manage this from the outset. In many cases they also rely on open models because these are cheaper and more flexible. This is where the circle closes: because open-source models can lack the kind of transparency the law relies on, SMEs may be unable to evidence all the requirements for high-risk applications. The result is either that they do not build such solutions at all, or that they turn to the AI services of major cloud providers on the assumption that compliance is handled there. Over time this leads to market concentration and, from a European perspective, greater dependence on large providers that are typically not European.

In the short term the solution might be to issue more targeted guidance, adopt standardized documentation templates, set up central points of contact and, where justified, allow reasonable extensions to application deadlines. These are technical steps that keep the Act’s fundamental-rights objectives intact while making implementation more practical.

The debate around the AI Act’s practical usability and flaws is not about weakening fundamental rights or transparency. The real issue is that implementation is, in several places, unnecessarily heavy and overlapping. In the short term, a workable path could be to refine the sectoral scope of Annex III, set clear cooperation duties for GPAI providers, consistently harmonize concepts and procedures, and support implementation with SME-friendly tools such as guidance, common templates, central points of contact and realistic timelines. If these steps go ahead, the AI Act can both build trust and leave room for innovation; if they do not, the market is likely to drift toward slower projects and greater dependence on large, non-European providers.


István ÜVEGES, PhD is a computational linguist, researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of legal language (legalese), the Plain Language Movement, and sentiment and emotion analysis.