EU AI Act: Some Considerations to Think About—Part I.

The European Union’s AI Act will fundamentally change the way AI-based systems can operate in the EU, and perhaps even in the world. The draft has often been criticized for the vagueness of its definitions. That is why today’s article examines the draft’s definition of “AI system”, as well as the requirement of “error-free” training data for high-risk AI systems. The latter may not be a feasible expectation in practice.

Just a few weeks ago, the European Parliament, after lengthy negotiations and a multi-stage process, approved the Artificial Intelligence Act (AI Act) by a large majority of MEPs. The regulation has the dedicated aim of guaranteeing safety and respect for fundamental rights while encouraging innovation. It was proposed by the European Commission in April 2021 and will form an integral part of the EU’s digital strategy.

According to the European Parliament’s communication, the regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. At the same time, the communication stressed that it is equally important for the new regulation to boost innovation and secure Europe’s leadership in the development of artificial intelligence.

As we approach the entry into force of the AI Act, it is clear that the EU has taken ambitious steps to advance the regulation of artificial intelligence (AI) systems, particularly for developers of foundation models and generative AI systems. Much has been written about earlier versions of the draft as it evolved. Among other things, it has been discussed how EU legislation, as a kind of regulatory export, could influence US AI-related legislation. The spotlight has also been put on how well the new legislation will stand the test of time in a rapidly changing technological world. Perhaps the best-known element of the Regulation, however, is that it sets obligations for AI based on the potential risks and the extent of the impact.

According to the proposal, each AI system can be categorized as a “minimal risk”, “limited risk”, “high risk”, or “unacceptable risk” system. For the latter, the draft proposes a total ban on the applications in question, while for applications classified as “high risk”, it sets out strong quality assurance, accountability, and transparency requirements for operators and developers, as well as for the rest of the AI supply chain.

For a piece of legislation of this magnitude, it is not surprising that the draft itself is as complex as the problems it is intended to regulate. To illustrate how the AI Act will create a new situation once it enters into force, we highlight two phenomena here. The first is the draft’s definition of AI systems, and the second is the correctness requirement for the training data of so-called high-risk AI systems.

The concept of “artificial intelligence system” (or “AI system”) is a good example of the challenge of crafting definitions in legislation for a rapidly changing technological area. And the requirement concerning the correctness of training data illustrates the hidden complexity of criteria that may seem simple at first sight.

It is an interesting phenomenon that, especially in recent years, the term artificial intelligence has become more of a catchword than a technical term. Almost every technology company now advertises its solutions as “AI-powered”. In most of these cases, what developers mean by “AI” is some machine-learning (ML) model or algorithm.

It is worth noting here that the capabilities and application areas of ML models are not homogeneous. For instance, a Support Vector Machine is an ML algorithm that, in fact, learns only decision boundaries. Put simply, it can decide, for example, whether an object we show it is an orange or a pear (or somewhere in between, if we feel like crossbreeding).
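As a minimal illustrative sketch of this idea, the short example below trains such a classifier with the scikit-learn library; the two features (weight and a “roundness” score) and all the data points are invented purely for illustration:

# Minimal sketch: a Support Vector Machine learning a decision
# boundary between two fruit classes. Features and data points
# are invented for illustration only.
from sklearn.svm import SVC

X = [
    [140.0, 0.95],  # oranges: weight (g), roundness score
    [150.0, 0.97],
    [130.0, 0.92],
    [170.0, 0.60],  # pears: typically heavier and less round
    [180.0, 0.55],
    [160.0, 0.65],
]
y = ["orange", "orange", "orange", "pear", "pear", "pear"]

clf = SVC(kernel="linear")  # learn a linear decision boundary
clf.fit(X, y)

print(clf.predict([[145.0, 0.93]]))  # -> ['orange']

The model does nothing more than separate the labeled examples in feature space; everything it “knows” is where the boundary between the two classes lies.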

Machine learning, on the other hand, also produces the highly sophisticated generative models that drive today’s most popular conversational agents (chatbots). Examples include OpenAI’s ChatGPT, the Gemini models developed by Google, and the latest brainchild, Anthropic’s Claude 3. Of course, these are much more than just chatbots; the GPT-3 model family, from which ChatGPT descends, has been further developed to support programming tasks. A prime example was when developers were able to start using a version of the model as part of GitHub’s Copilot service. There, the model provided real-time auto-completions to already written code as a kind of assistant, helping programmers be more productive in their daily activities.
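To give a flavor of this assistant-style completion, here is a minimal sketch using OpenAI’s official Python SDK. It assumes a valid API key in the environment; the model name and prompt are illustrative, and this is not how Copilot itself is implemented:

# Minimal sketch of generative code completion, in the spirit of the
# assistant behaviour described above. Assumes the official OpenAI
# Python SDK and an OPENAI_API_KEY environment variable; the model
# name is illustrative, and this is NOT Copilot's actual mechanism.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

partial_code = "def average(numbers):\n    "

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Complete the Python code."},
        {"role": "user", "content": partial_code},
    ],
)
print(response.choices[0].message.content)

The underlying principle is the same as in chatbots: the model predicts a plausible continuation of the text it is given, only here the “text” happens to be source code.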

The problem is further complicated by the fact that there is no single, generally accepted definition of artificial intelligence, nor a set of necessary and sufficient conditions for it. To date, AI research has not produced a definition that describes exactly what we consider intelligence to be, and hence what its “artificial” equivalent might be. This is not surprising, of course, since the concept of intelligence is usually discussed from a human perspective: whatever we mean by it, the measure of the concept is always human. This also implies that “intelligence” is a multifaceted and abstract concept, encompassing a wide and varied range of cognitive abilities. The situation is not made any simpler by the fact that attempts to define it reflect differing scientific views, cultural influences, and often individual beliefs.

Having clarified the above framework, the next part of this article will look specifically at how the EU has responded to the challenges of defining an “AI system”, and why expecting a development process to use perfect training data is problematic.


István ÜVEGES is a researcher in Computational Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
