
EU AI Act: Some Considerations to Think About—Part II.

One might rightly ask why it matters to a piece of legislation what science thinks about intelligence, or what specific technology lies behind a catchy marketing slogan.

Well, the AI Act's dedicated purpose is to protect the citizens of the European Union from the undesirable consequences of systems using artificial intelligence. An essential element of any legislation of this magnitude is that it lays down the definitions it applies, which those enforcing the law can later use as a basis when deciding cases covered by the legislation.

The AI Act faces a twofold requirement in this area. On the one hand, whatever definition is used must be specific enough to be applicable. On the other hand, it must remain general enough that the text does not have to be amended for each new technology. This follows from the nature of law itself, which must regulate general situations in an abstract way, since it is impossible to create a specific rule for every specific situation. A particular difficulty with technology-related legislation, especially in the current state of AI, is that the technology is developing and changing rapidly. The relevant law must therefore stand the test of time, which again is only possible through sufficient abstraction.

Moving somewhat away from the definition of intelligence, or its artificial counterpart, the AI Act has sought to define an “artificial intelligence system” since the first draft of the legislation, which was published back in 2021. In this original draft (Commission draft 2021), Art. 3 (Definitions) defines such systems as follows:

‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

In view of the many criticisms of this original wording, it was changed to the following in the latest version, available since March:

‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

As many commentators have pointed out, this is in line with the OECD's earlier definition of an AI system:

… is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

The original definition in the AI Act's draft referred to Annex I, which contained the following list:

ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES
referred to in Article 3, point 1

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

In comparison, the first pages of the new version (Position of the European Parliament, recital (12)) contain a longer recital, which partly reflects the above and partly incorporates the conceptual framework of the former Annex I. It mentions, for example, machine learning systems as an important form of artificial intelligence governed by the law, as well as another major branch of today's popular AI modeling, the logic- or knowledge-based approach.

Although at first sight it may seem like a step backward, it is promising that the draft no longer specifically mentions only the approaches known today (deep learning, inductive programming, etc.). While it is true that the previous Annex I referred to them only by way of example, the new version no longer gives the impression that the list is exhaustive. This is an advantage, given that intensive research is constantly being carried out to find new architectures, which could make such a list obsolete at any time.

If we look only at the definitions of AI systems, the AI Act approaches the concept from a practical rather than a theoretical point of view. The draft also states in the introductory recitals that AI systems should be strictly distinguished from solutions that operate on purely hard-coded rules. This alone is not sufficient, but it is an extremely important constraint, as the same dichotomy is often used in software design.

An earlier version of the draft drew considerable criticism for the way it defined the criteria for clearly distinguishing artificial intelligence systems from simpler software. According to the relevant analyses, the most problematic point was the definition of the expected level of “autonomy”, mainly because the AI Act did not specify a degree of autonomy that would be specific to AI systems alone. By comparison, a robotic vacuum cleaner can respond to its environment autonomously, without human intervention, yet it may not have any AI operating within it. The new version seems to retain this underspecification unchanged:

AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention.

It should be noted, however, that the new version includes this only in the introductory recitals, which are presumably intended to put the legislation in context rather than to provide a definition of the expected degree of autonomy. Interestingly, the autonomy of machine-learned models is also included as a risk factor for general-purpose AI models (ibid., recital (110)).

Another point of interest is the section of the draft, recital (67), that deals with the data used to train high-risk AI systems. The text states (emphasis by the author):

High-quality data sets for training, validation and testing require the implementation of appropriate data governance and management practices. Data sets for training, validation and testing, including the labels, should be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose of the system.

Later, the text also discusses the possible harmful consequences of bias in the data (in relation to certain fundamental rights and the prohibition of discrimination). In this context, it is of course obvious why the legislator requires the training data of machine learning models to be error-free.

The interesting thing is that the statistical characteristics of a data set can often be determined exactly, although not easily. For example, if you are building a database of voters for some purpose, you can measure whether your data are representative of the whole population by looking at demographic variables. However, for some typical machine learning tasks, “completeness” and freedom from error are much vaguer notions.
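To make the representativeness check just described concrete, a simple statistical comparison is often enough. The sketch below is only illustrative: the demographic groups, sample counts, and population shares are invented, and the chi-square goodness-of-fit test is just one of several ways such a comparison could be made.

```python
# A minimal sketch of one way to check demographic representativeness:
# compare the age-group distribution of a voter sample against known
# population shares with a chi-square goodness-of-fit test.
# The groups, counts, and shares below are illustrative, not real data.
from scipy.stats import chisquare

population_shares = {"18-29": 0.20, "30-49": 0.35, "50-64": 0.25, "65+": 0.20}
sample_counts     = {"18-29": 150,  "30-49": 380,  "50-64": 270,  "65+": 200}

total = sum(sample_counts.values())
observed = [sample_counts[g] for g in population_shares]
expected = [population_shares[g] * total for g in population_shares]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the sample's age distribution deviates from
# the population's, i.e. the data may not be representative.
```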

Take the example of sentiment analysis. Here, the training data may consist of sentences labeled by humans as positive, negative, or neutral, indicating the emotional content of each sentence. Manual tagging will typically introduce errors into the data, whether through inattention, poorly worded annotation instructions, or mistakes made later, during the processing of the data.

A common metric to measure the “reliability” of a dataset is the inter-annotator agreement (IAA), which gives us information about how reliable a label for a piece of data is. For larger projects, such as datasets created for research purposes, this type of quality assurance is standard practice. For such a measurement to be feasible, each piece of data needs to be independently assessed by at least two people, perhaps one reviewing the decisions of the other. This, of course, requires double the work time and has a significantly higher cost than “one-off” annotation. For this reason, it is common practice to check only a certain percentage of such databases in this way. Behind this is often the intuition that the quality measured on a randomly sampled subset is an adequate representation of the whole dataset.
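As a rough illustration of how such an agreement measure works in practice, the sketch below computes Cohen's kappa, one common IAA statistic for two annotators, on a handful of made-up sentiment labels. The label sequences and the choice of kappa are assumptions for the example, not something prescribed by the AI Act or by the text above.

```python
# A minimal sketch of inter-annotator agreement on sentiment labels,
# using Cohen's kappa (one common IAA measure for two annotators).
# The label sequences below are made up for illustration.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neg", "neu", "pos", "neg", "neu", "pos", "neg"]
annotator_b = ["pos", "neg", "pos", "pos", "neg", "neu", "neu", "neg"]

# Kappa corrects raw agreement for the agreement expected by chance:
# 1.0 means perfect agreement, 0.0 means chance-level agreement.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```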

This may be of interest in the context of the AI Act because some kind of metric, or at least some kind of evaluation procedure, will have to be developed to measure the “degree” of freedom from error. Based on current development practice, this will be far from trivial. As with many other provisions of the draft, it will be interesting to see how practice evolves in the application of the law.

Overall, therefore, the AI Act in its current form (which is unlikely to change much) has evolved a lot from its initial state. The draft contains several key definitions whose interpretation will have a significant impact on the future of AI development in the EU. Perhaps the most important question is how the legislator’s intentions will find their way into the everyday lives of EU citizens.


István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
