The World Innovates—Europe Regulates: AI Act

Did the EU's rush to adopt the first comprehensive AI regulation produce weaker results than anticipated? Is it possible to balance the need for regulation and the protection of human rights with the need to compete on the playing field of tech innovation? Is the EU AI Act outdated before even becoming enforceable? I will endeavor to answer all of these questions below.

In the last few years, after the storm raised by ChatGPT, the whole world has started talking about artificial intelligence. Yet artificial intelligence, or at least the idea of it, has existed for a long time. Can machines think? Alan Turing proposed the Imitation Game in 1950, a test for distinguishing machine intelligence from human intelligence. John McCarthy coined the term itself in 1956, but it took some seventy years for artificial intelligence to experience a real boom and become a hot topic in almost every corner of the world. The race for supremacy in innovation and technological achievement has begun, and the leading players have already emerged.

Lawyers, on the other hand, have for years been thinking not only about how AI could be used to improve our lives, including the legal system, but also about the possible dire consequences of using AI in everyday life and in the law. What we mostly agree on is that we should direct the development and use of artificial intelligence through regulation so that it remains practical and serves people, while ensuring that fundamental human rights and human dignity, guaranteed by numerous international documents, remain intact and protected. Take constitutional law: at first glance, AI and constitutional law seem to have no points of contact, yet their intersection harbors perhaps the gravest dangers for the constitutional rights of citizens.

What distinguishes AI from other modern technologies, such that only AI requires special attention from the legislator? In 2020, the European Commission identified several problems in the development and use of AI systems that stem from their specific characteristics: complexity, opacity, continuous adaptation and unpredictability, functional dependence on data and data quality, and, probably most critical of all, autonomous behavior. Although today we still have weak AI and the need for human oversight, once an AI system is deployed, it infers and autonomously decides on the result based on the input it receives. Neither the provider nor the deployer can predict the outcome with certainty, which might be the trickiest and scariest part of all. It is not difficult to see how legally problematic this unforeseeability could become in future disputes.

Proponents of the development of modern technologies have always pointed out that excessive regulation slows down development and automatically renders the state uncompetitive in the leadership race. Skeptics, on the other hand, are alert to the possible dangers, such as fraud, manipulation, privacy harm, and discrimination, and believe that stricter regulation is the best prevention; in other words, offense is the best defense.

The European Union promotes values such as respect for human dignity, equality, freedom, the rule of law, democracy, and respect for human rights, all emphasized in the European Union Charter of Fundamental Rights.

One of the foremost concerns in the context of AI in Europe is the potential for privacy and data protection rights violations. The General Data Protection Regulation (GDPR) sets stringent requirements regarding the processing of personal data, emphasizing individuals’ rights to privacy and control over their information. AI systems often require access to large datasets, including personal data. The deployment of AI technologies must, therefore, adhere to EU data protection standards, ensuring transparency, consent, and the minimization of data collection. Balancing the innovative potential of AI with robust data protection measures is essential to upholding fundamental privacy rights.

The principle of non-discrimination, enshrined in Article 21 of the Charter of Fundamental Rights, is another major concern, because AI algorithms can perpetuate existing biases, leading to discriminatory practices in law enforcement, hiring, and judicial decisions. The phrase “garbage in, garbage out” captures the problem: an AI system is only as good as the dataset it was trained on. If a system used for predictive policing is trained on biased data, the outcome can be troubling, as it may reinforce discriminatory profiling and thereby contradict the fundamental principle of equality. Studies have repeatedly shown that AI facial recognition tools, for example, perform best on one demographic (white men) and are prone to mistakes when identifying others, with the highest error rates recorded for Black women. Addressing such algorithmic biases is essential to ensuring fairness in AI outcomes and maintaining compliance with European non-discrimination standards.

The mere idea of AI being used in courtrooms around the world raises major concerns. Whether AI is used to gather evidence or to assist judicial decision-making, I must argue that it endangers core principles of law such as the presumption of innocence, the rule of law, due process, and the right to a fair trial. The last of these is guaranteed by Article 47 of the EU Charter, which demands transparent judicial processes and the effective participation of affected individuals. Sentencing and the determination of bail should be transparent processes. Decisions that carry such weight should not be guided by imperfect and opaque algorithms, as individuals may be unable to challenge the basis of such choices, undermining their right to due process.

If we decide that the use of AI systems in courtrooms is inevitable, we must ensure that these systems are explainable and subject to scrutiny.

Moreover, the principle of accountability must be considered when deploying AI technologies within the EU. Determining liability and responsibility becomes complex when AI systems make decisions with significant consequences for individuals. Establishing clear legal frameworks that hold both AI developers and users accountable for the outcomes of these systems is essential to reinforcing the rule of law.

So, the logical question to ask is: Is AI regulated? An increasing number of countries are formulating and enacting AI governance legislation and policies. While the United States initially adopted a lenient approach toward AI, there has been a growing call for regulation: the White House published the Blueprint for an AI Bill of Rights, a framework designed to protect the rights of individuals in the era of AI, and President Joe Biden signed an executive order on AI in 2023. In contrast, the Chinese government has endorsed guidelines on generative AI. Brazil, Canada, and Australia have also advanced AI rules of their own. At the international level, the Organisation for Economic Co-operation and Development (OECD) established non-binding Principles on AI in 2019, UNESCO introduced a Recommendation on the Ethics of AI in 2021, the G7 agreed upon International Guiding Principles on Artificial Intelligence in 2023, and the Council of Europe adopted an international convention on AI in May 2024.

To harness the benefits of AI while protecting the so-called European values, the EU is already taking steps to shape a regulatory framework emphasizing ethical AI development. By placing human dignity at the forefront of AI governance, the EU aims to create an environment where innovation does not come at the expense of individuals’ rights and values. Surely everyone is familiar with the saying that America and China innovate while Europe regulates. Aware that it probably cannot match some countries in terms of technological progress, the European Union decided to be a champion in adopting regulations that others will emulate. Thus, the first comprehensive law on artificial intelligence was created on European soil, and many expect it to have the so-called Brussels effect (as was the case with the GDPR). The AI Act was not created overnight, but I would argue it was nevertheless quite rushed. For years, various documents adopted in Europe prepared the ground for a regulation meant to be a guiding star in the unknown; the Council of Europe, the European Commission, and the European Parliament were particularly active. Until the adoption of this regulation, which is binding on all member states, each European country could decide for itself how to deal with the legal uncertainty brought by the dawn of AI and how to protect its citizens: by choosing soft law, which facilitates the development of AI and supports innovation, or by choosing hard law, which achieves better protection through stricter bans but may stifle opportunities for innovation and investment.

The European Union wanted to take a big step forward with the AI Act, but I believe it has only partially succeeded in creating a legal framework worth emulating. Some even believe that through the Act the EU closed the door to innovators and investors, imposing restrictions and obligations that do not exist in other jurisdictions. The AI Act is constructed mainly as a product safety regulation, meaning the focus is on risk. Although the risk-based approach is probably the best one, and its many exceptions leave room to maneuver, the Act only tries to prevent harm without addressing liability, which means we still need AI liability legislation. Reading the text of the Act itself, some provisions appear redundant and too narrow, while exceptions are so broad that they weaken the main provisions. The next possible problem is that the Act may be outdated by the time it finally becomes enforceable. For example, since I have already mentioned the now famous ChatGPT in the introduction, I must briefly note that the legislator had not planned to regulate generative AI in the Act at all until that moment, so those provisions were added later in the legislative process. Although the AI Act entered into force on August 1, 2024, it will become enforceable piece by piece, according to the schedule set out in the Act itself. It is easy to conclude that any legislative process has a hard time keeping up with the pace of technological development. However, I must admit that I also see a light at the end of that tunnel, since the legislator seems to have anticipated this by providing for amendment through delegated acts. The Act will be evaluated every four years in this respect, and the entire Act will be reviewed five years after its entry into force (by August 2, 2029). This should ensure that the Act reflects present-day conditions and keeps up with technological advancements.

Unlike the EU, Switzerland will not have a separate law regulating AI; instead, existing laws will be applied and, where necessary, selectively adapted. The UK, meanwhile, is positioning itself somewhere between the United States’ soft-law approach and the EU’s hard-law approach.

To conclude, many countries have followed with interest the solutions European lawyers have found and will certainly incorporate many of them into their own legislation. We are only at the very beginning of the AI era. Until recently, it was widely believed that artificial general intelligence would not arrive for several lifetimes, yet some renowned experts now claim we can expect it as early as next year. The speed at which technology is evolving, and all the dangers it can bring to the legal order, is a bit scary and almost impossible to keep track of, so it will be interesting to see how the AI Act withstands the test of time and whether it provides sufficient, if any, protection.


Doris Skaramuca was born in Dubrovnik, Croatia, on October 21, 1990. She graduated from the Faculty of Law, University of Zagreb, Croatia. Her main interests are bioethics, criminal law, international law, human rights, and artificial intelligence. For the last seven years, she has been researching artificial intelligence and its relationship with the law, with special emphasis on the legal regulation of AI. She wrote her master’s thesis on “Legal Regulation of AI in the Development and Production of Medicines.” She was a junior associate at a corporate law office in Zagreb, Croatia. Currently, she is a PhD and LLM candidate at the University of Miskolc, Hungary, and a researcher at the Central European Academy in Budapest, where she studies the implications of artificial intelligence for criminal law.