AI Development Within Legal Boundaries: The Role of Regulatory Sandboxes in the EU

AI systems are having an increasingly significant impact on everyday life; at the same time, they are introducing new risks. In response, the EU has made the introduction of regulatory sandboxes mandatory, offering a safe environment for development. The objective is to bring AI solutions to market that are lawful, transparent, and socially acceptable.

Artificial intelligence (AI) has made rapid and striking progress in recent years. A growing number of organizations are deploying AI-based solutions in areas that once seemed unimaginable, such as medical diagnostics, legal analysis, and recruitment. As the technology spreads more widely, new and more complex risks are also emerging, including algorithmic bias, lack of transparency, and the potential violation of fundamental rights.

This raises a crucial question: how can we ensure that an algorithm functions fairly, transparently, and without discrimination? Equally important, the system must remain auditable, even when parts of its operation are not fully understandable to its own developers (so-called “black box” systems). Addressing these challenges is not only the task of developers; legislators must also respond actively.
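To make the first of these questions more concrete, the sketch below shows what one small piece of a fairness audit might look like in code. It is purely illustrative: the data, the function name, and the scenario are invented, and demographic parity is only one of many metrics a real audit would consider.

```python
# Illustrative sketch only: a minimal demographic-parity check for a
# hypothetical automated decision system. All names and data are
# invented for demonstration; a genuine audit under the AI Act would
# involve far broader documentation and testing.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates
    between any two demographic groups, plus the per-group rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision  # decision is 1 (favourable) or 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical recruitment-screening outcomes for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"Positive-decision rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # a large gap warrants scrutiny
```

Even such a simple check illustrates why auditability matters: the metric is computed from a system’s observable decisions alone, without having to open the “black box” itself.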

To address these issues, the European Union adopted the AI Act in 2024, introducing a comprehensive, risk-based regulatory framework for artificial intelligence systems. One of the key components of the regulation is the establishment of a so-called regulatory sandbox: a controlled testing environment that each Member State is required to set up by August 2026, either independently or in cooperation with other countries. Within this protected environment, developers can trial their AI systems under real but controlled conditions before those systems are released to the market. The process involves the active participation of the regulatory authority, which helps ensure that the development meets legal, ethical, and safety standards. This is particularly important, since AI-based systems can have a significant impact on individuals’ rights and freedoms, especially when decisions directly affect users. It is therefore essential that AI systems respect fundamental rights from the earliest stages of development and protect users’ interests.

The concept of a regulatory sandbox is not entirely new. It first emerged in the financial sector when the United Kingdom’s Financial Conduct Authority (FCA) launched its fintech sandbox program in 2015. Within this framework, startups developing fintech services could test their solutions under real but controlled conditions, under the direct supervision of the FCA and with structured professional support. The initiative proved successful in many respects: a significant number of participants entered the market successfully, and the model was later adopted by several other countries in their own regulatory systems. Although criticisms were raised, for instance regarding the lack of impact assessments after the market launch of sandbox-tested solutions, interest in the model remained strong. It was eventually incorporated into AI regulation and became a regulated practice at the EU level.

A look at international trends shows that testing in a “regulated environment” is becoming an increasingly common practice in AI regulation, albeit through various approaches. In the United States, for example, the state of Utah offers temporary exemptions for innovative AI solutions while the authorities directly monitor their development. Since 2019, the United Arab Emirates has operated its “Regulations Lab” initiative, considered one of the world’s first AI regulatory sandboxes. The OECD has also explicitly recommended sandboxes as an appropriate framework for the experimental testing and scaling of AI systems. These examples show that regulated testing environments are gaining significance not only within the EU but also on a global scale.

Perhaps the most important goal of the EU’s AI regulatory sandbox is to enable close collaboration between developers and regulatory authorities. The process begins with the preparation of a detailed sandbox plan, which outlines the solution to be tested, its purpose, expected impact, and potential risks. At every stage of development, the authority provides continuous feedback, supporting a clear and safe development process.
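While the AI Act does not prescribe any particular format for the sandbox plan, its core elements can be illustrated with a simple structured sketch. Everything below is hypothetical: the field names merely mirror the items listed above (the solution to be tested, its purpose, expected impact, and potential risks).

```python
# Hypothetical sketch of the information a sandbox plan might capture.
# The AI Act does not mandate this structure; the field names are
# invented to mirror the elements discussed in the text.

from dataclasses import dataclass, field

@dataclass
class SandboxPlan:
    system_name: str        # the AI solution to be tested
    purpose: str            # intended use and objective
    expected_impact: str    # anticipated benefits and affected users
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)

plan = SandboxPlan(
    system_name="CV-screening assistant (hypothetical)",
    purpose="Rank job applications for human review",
    expected_impact="Faster shortlisting with human-in-the-loop review",
    identified_risks=["algorithmic bias", "lack of transparency"],
    mitigation_measures=["regular fairness audits", "human oversight"],
)
print(plan)
```

In practice, of course, the plan is a legal and technical document agreed with the competent authority, not a piece of software; the sketch only shows how its elements fit together.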

This kind of collaboration offers significant added value not only for developers but also for regulators. It allows them to gain firsthand knowledge of the latest directions in AI technologies, as well as the legal questions and opportunities that accompany them. Importantly, all of this takes place before the technology in question becomes a widely available market product. Working with developers leads to the accumulation of practical knowledge that supports legal interpretation and the creation of guidance documents. Sharing experiences, learning together, and developing recommendations and guidelines are all integral parts of how the sandbox operates.

The experience gained through this process, along with the documents produced during the sandbox phase (such as the exit report or certification), can later serve as important points of reference during compliance procedures or market surveillance investigations. Participation in the sandbox increases legal certainty, as developers acting in good faith may be exempt from fines. It is important to keep in mind, however, that they remain liable for any damage caused to third parties.

The range of potential sandbox participants is extremely broad: startups, small and medium-sized enterprises, large corporations, as well as academic and civil society organizations can all benefit from it.

The regulatory sandbox does not, in itself, guarantee that every AI system entering the market will be fully compliant. Nevertheless, it clearly provides practical support for developing solutions that are safe, legally sound, and socially acceptable. The testing environment enables stakeholders to respond to emerging issues in a timely and controlled manner, which is particularly important in a fast-changing and complex technological landscape.


István ÜVEGES, PhD, is a computational linguist working as a researcher and developer at MONTANA Knowledge Management Ltd. and as a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.