AI Act vs. reality: technical obstacles to compliance and the possible workarounds – Part I.

The EU’s AI Act promises a unified framework for trustworthy AI, yet day-to-day implementation is already colliding with overlapping rules, uneven obligations and missing standards. Interviews and recent analyses show the biggest friction points: a one-size-fits-all approach, transparency gaps for GPAI and open-source models, and disproportionate burdens on SMEs. Short-term fixes are within reach if policymakers refine Annex III’s scope, require real cooperation from GPAI providers, harmonize concepts and procedures, and back it all with practical guidance, common templates, a single point of contact and realistic timelines.

Over the past decade, the European Union has gradually built an extensive and complex digital regulatory framework. First came the GDPR to protect personal data, then the DSA and the DMA to oversee online platforms and the largest tech players. Now we have reached the AI Act, which originally promised a unified European framework for developing and using AI. The Bertelsmann Foundation and the German AI Association recently published the white paper “Simplifying European AI Regulation”, which maps the practical hurdles to implementing the AI Act based on interviews and workshops.

These include overlaps between horizontal and sectoral rules, transparency gaps for GPAI and open-source models, disproportionate burdens on SMEs, and delays in technical standardization. The aim is to propose simplification and harmonization steps that do not weaken the protection of fundamental rights. Feedback gathered from market actors and civil society suggests that many stakeholders do not dispute the AI Act’s objectives; their concern is that, in its current form, the Act will be particularly hard to implement. In parallel, the European Commission is preparing a digital omnibus package for late 2025 to rein in the overgrowth of obligations and to align sometimes contradictory rules. This signals that, even at launch, the system risks becoming too heavy, suppressing day-to-day innovation rather than supporting it. In what follows, largely aligned with the structure of the white paper, we review the key critiques that have emerged around the AI Act, including the relevant technical details and implications where needed.

“One size fits all”

One of the main critiques is that the AI Act’s purely horizontal approach is overly uniform, so it does not capture sectoral differences with enough nuance. In other words, the regulation seeks to bring every AI technology and every use case under a single umbrella. On paper this sounds consistent, because it prescribes uniform minimum requirements and procedures across sectors. In practice, though, it means that a bank’s credit-scoring algorithm, a hospital’s diagnostic tool and an AI system running in a factory are all subject to the same very detailed compliance package. This is why several proposals argue that certain areas currently classified as high-risk should be removed from Annex III (the list of high-risk AI systems) and placed under separate, sector-specific regimes. The examples cited include Annex III use cases such as employment, credit scoring and insurance, alongside products already governed by sectoral legislation, such as machinery falling under the Machinery Regulation and medical devices.

From a technological perspective this matters because these sectors have long-standing, standardized risk-management and compliance processes. In healthcare, for instance, CE marking entails clinical evaluation, post-market surveillance and extensive documentation. If an additional layer of AI Act compliance must be inserted into the same chain, manufacturers end up following two parallel tracks that overlap in many places and sometimes even conflict. This is best described as the overlap of horizontal and vertical rules. It can lead to situations where, for example, a medical software developer slows down development, or chooses not to label certain features as AI at all, simply to avoid falling into a stricter category.

The squeeze on downstream developers and the open-source paradox

Let’s turn to companies that do not build the base model (foundation model, large language model, etc.) themselves but develop their own applications on top of an existing, often general-purpose, AI model. The literature often refers to these actors as “downstream” players. Their legal status under the AI Act depends on the specific context: if they place their own AI system on the market, they qualify as a provider; if they only use it, they qualify as a deployer. Under the current text of the Act, they would need to demonstrate a compliance level almost identical to that of the original model provider. To do that, however, they would need very detailed knowledge of the base model they are building on.

For GPAI model providers, the AI Act imposes explicit transparency obligations, such as technical documentation, a copyright policy and a summary of the training data. GPAI models released in open-source form are exempt from certain of these obligations, unless the model is classified as posing systemic risk. This creates problems in two directions. With a closed (non-open-source) model, the downstream actor lacks sufficient technical and provenance information to evidence its own compliance. With an open-source GPAI model, several transparency elements required by law may not apply to the upstream provider (again, except in the systemic-risk case), so the provider of a high-risk application may be unable to substantiate data requirements, testing and performance metrics.

From a technological standpoint this is a serious issue, since today’s AI ecosystem relies mostly on fine-tuning, domain adaptation and the re-use of components. Most (European) companies do not train LLMs from scratch; instead, they take an open or semi-open base model and make it task-ready with their own data, workflows and targeted fine-tuning. Typical approaches include LoRA-based or adapter fine-tuning, and Retrieval-Augmented Generation (RAG), where the model is given context from an external knowledge base (see the sketch below). In this setup most of the value sits in the added layer, yet proving compliance often still requires detailed technical and provenance information about the underlying base model. If that information is not accessible, the gap between the developer’s obligations and upstream transparency quickly turns into a concrete compliance risk. If the base-model provider has no duty to cooperate, the risk ultimately lands on the smaller European developer, who lacks access to all the necessary information. This is why the existing cooperation obligations should be extended to providers of general-purpose AI models. Without that, this bottleneck can in practice halt the development of high-risk AI in Europe.
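To make the technical side concrete, here is a minimal Python sketch of what such a LoRA adaptation looks like in practice, using the Hugging Face transformers and peft libraries. The base-model name and the hyperparameters are illustrative assumptions, not recommendations, and a real project would add a training loop and evaluation on top. The point to notice is how thin the downstream layer is: only the small adapter matrices are trained, while everything the AI Act asks about the underlying weights and training data stays with the upstream provider.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative placeholder: any open or semi-open causal LM would do here.
BASE_MODEL = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA freezes the base model and injects small, trainable low-rank
# matrices into selected weight matrices (here: the attention projections).
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # modules to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Typically well under 1% of all parameters end up trainable; the base
# model's training data and provenance remain opaque to the downstream team.
model.print_trainable_parameters()

Nothing in this workflow gives the downstream developer visibility into how the base model was trained or on what data; that information can only come from the upstream provider, which is exactly why the cooperation duty discussed above matters.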


István ÜVEGES, PhD is a computational linguist, researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of legal language (legalese), the Plain Language Movement, and sentiment and emotion analysis.