István ÜVEGES: Europe’s AI Legislation Sparks ‘Black Box’ Debate: Unraveling Tech’s Gifts and Risks

As the European Parliament tackles AI regulation, the mysterious ‘black box’ phenomenon demands attention, raising concerns over AI’s opaque nature and potential misuse. Unraveling this enigma becomes crucial as it impacts powerful algorithms like Large Language Models, shedding light on transparency, accountability, and ethical use in our ever-changing digital landscape. How does European legislation address the ‘black box’ enigma? 

In the world of Artificial Intelligence, “black box” is a metaphorical term referring to the fact that the way a system works is not necessarily transparent or fully understood by humans. To understand this phenomenon, let’s look at the processes behind today’s most powerful algorithms (e.g., ChatGPT, developed by OpenAI) through which AI acquires some kind of “knowledge”.

Artificial Intelligence research intertwines various disciplines such as computer science, psychology, neuroscience, and more, encompassing diverse sub-fields like robotics, natural language processing, expert systems, and computer vision, making it a profoundly interdisciplinary and multifaceted endeavor.

Machine learning is today probably the most important of these sub-fields, both in research and in related industrial development. Viewed from above, machine learning consists of three vital elements: an algorithm, training data, and the resulting model. The algorithm’s role is to learn new ‘knowledge’ from vast numbers of examples, uncovering patterns in the data that are imperceptible to humans. As this ‘knowledge’ takes shape, it is stored within the model, which is what gives AI its transformative capabilities.
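To make the triad of algorithm, training data, and model concrete, the minimal sketch below uses scikit-learn as an illustrative toolkit (an assumption for illustration; the article names no specific library) to show how an algorithm extracts patterns from example data and stores them in a trained model.

```python
# A minimal sketch of the algorithm / training data / model triad.
# scikit-learn is an illustrative choice, not a tool named in the article.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# "Training data": example inputs paired with the labels we want predicted
# (synthetic here, purely for demonstration).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# "Algorithm": a learning procedure that extracts patterns from the examples.
algorithm = LogisticRegression(max_iter=1000)

# "Model": the trained artefact in which the extracted 'knowledge' is stored.
model = algorithm.fit(X, y)

# The model can now make predictions on inputs it has never seen.
print(model.predict(X[:5]))
```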

The textbook example is how Large Language Models (LLMs) work. The impact of LLMs on the modern world is clearly illustrated by the GPT-4 language model on which ChatGPT is based, but also by the LLM behind Bard, Google’s recently announced conversational AI tool intended to compete for market leadership. The most common task of LLMs is to collect, interpret, and store information about human language and/or the information expressed in written human language. Such information could be, for instance, the grammatical rules of the language, the parts of speech of individual words, the meaning of the words that make up the language in relation to the language itself, or the set of correct answers to questions if the model is expected to work as a chatbot. The training data consists of huge text databases, and the result is a model (mentioned above) that can be further trained for a number of more specific tasks (such as sentiment analysis, question answering, etc.).
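As a hedged illustration of reusing such pretrained ‘knowledge’ for a more specific task, the sketch below applies a sentiment-analysis model through the Hugging Face transformers library; the library and the default model it downloads are assumptions for illustration, not tools named in the article.

```python
# A sketch of applying a pretrained language model to a downstream task
# (sentiment analysis). The transformers library is an illustrative choice.
from transformers import pipeline

# Load a model that was pretrained on large text corpora and then
# fine-tuned for sentiment classification.
sentiment = pipeline("sentiment-analysis")

# The stored 'knowledge' is applied to a new input sentence.
print(sentiment("The court's reasoning in this decision is remarkably clear."))
```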

The opacity of black box models can stem from several sources. The first is the algorithm itself: if it is not in the public domain, its exact mechanism of operation is unknowable to anyone except the developer. Even if this is the least likely scenario, it is still unsettling. In the second case, the nature or source of the data used to train the model is unknown to the general public. An iconic example is the well-known ChatGPT, which, according to some sources, was trained on about 570 GB of text data; the exact nature of this data is not public, the developer has not yet provided precise information about it, and the data itself is not available on any public platform. In the third case, it is the model resulting from the process that is not accessible. This, of course, makes it impossible to analyze or interpret the information stored in it using different procedures.

In most cases, these decisions are based on some legitimate business interest of the developing company, such as maintaining a competitive advantage or the expectation of a return on investment. If a complete machine learning process can easily be reconstructed by other market players (reproducibility being a basic requirement for research projects, for example), competitors can gain a significant advantage: they no longer have to develop their own solutions from scratch, saving considerable time and money.

In addition to the above, the term black box can also refer to another feature of machine learning, one that is particularly important in the world of neural networks and deep learning. The three possibilities mentioned so far produce black box character essentially through human intention, so their solution is trivial in principle (even if difficult to implement and enforce in practice): making the individual components public. In many cases, however, the internal workings of the models themselves become so complex that, even if they are made public, it is questionable how the individual results arising from their use can be interpreted.

Taking again the example of language models, and the neural networks that underpin most of them, the inner workings of such models are in many cases opaque even to experts. Models are generally given some input, to which they produce an output in response. In image processing, for example, the input might be an image and the output a decision about what is shown in the image. In language models, the input may be a sentence and the output a decision about the emotional content of that sentence (sentiment analysis). A similar example in the field of law is the automatic generation of summaries of court decisions, or the automatic identification of their structural elements (case history, court decision, etc.). In many cases, the link between input and output is provided by hundreds of millions or even billions of parameters inside the model. These parameters are in effect weighted connections between the neurons of the network; their numerical values encode the information the model has learned and help it make decisions. To understand how input is transformed into output, we would need to understand the relationship between these connections, the values they store, and the input-output pairs. Obviously, this is an extremely difficult task (if only because of the number of parameters involved) and an intractable problem for human reason alone.
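The toy example below (using PyTorch as an assumed, illustrative framework) shows how quickly parameter counts grow even for a tiny network, which hints at why tracing input-to-output behaviour by hand is hopeless for models with billions of parameters.

```python
# A hedged illustration of why parameter counts make neural models opaque.
# PyTorch is an illustrative framework, not one named in the article.
import torch.nn as nn

# Even a toy classifier mapping a 768-dimensional sentence representation
# to two output classes already has roughly 200,000 parameters.
toy_model = nn.Sequential(
    nn.Linear(768, 256),  # 768*256 weights + 256 biases
    nn.ReLU(),
    nn.Linear(256, 2),    # 256*2 weights + 2 biases
)

n_params = sum(p.numel() for p in toy_model.parameters())
print(f"Toy model parameters: {n_params:,}")  # 197,378

# Production LLMs contain hundreds of millions or billions of such values,
# so manually tracing how a given input becomes a given output is infeasible.
```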

If the background to a machine learning model’s decisions cannot be known, whether because of deliberate choices or simply because of inherent characteristics of the technology, this can lead to a number of adverse consequences.

AI bias refers to the phenomenon whereby, due to conscious development decisions, unnoticed errors, or even biases introduced into the data with malicious intent, the decisions produced by a model tend to be disadvantageous for a particular group (e.g., minorities). To put it simply: if a bank uses a machine learning model to rate loan applications, the model may easily conclude that the return on loans extended in a particular area is low and that lending there is risky and should be avoided. If the majority of the population in that area belongs to some minority (e.g., ethnic) group, the model may generalize so that members of that group never receive a positive credit rating, purely because of an (even accidental) correlation in the training data. Real-life problems arising from AI bias are often more nuanced than this example and very difficult to detect; they can therefore cause significant harm if not handled effectively, and in cases such as the one above they may also violate the prohibition of discrimination.
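The sketch below, built entirely on fabricated synthetic data and an assumed scikit-learn setup, illustrates the mechanism described above: the model never sees group membership, yet a correlated proxy feature (here, a postcode) reproduces the disadvantage.

```python
# A synthetic, illustrative sketch of proxy bias in credit scoring.
# All data is fabricated; scikit-learn is an assumed, illustrative toolkit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (e.g., minority-group membership) - NOT given to the model.
group = rng.integers(0, 2, n)
# Postcode is strongly correlated with group membership (a proxy feature).
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)
income = rng.normal(50, 10, n)
# Historical repayment outcomes happen to be worse in the correlated area.
repaid = (rng.random(n) < np.where(postcode == 1, 0.40, 0.85)).astype(int)

# The model is trained only on postcode and income.
X = np.column_stack([postcode, income])
model = LogisticRegression(max_iter=1000).fit(X, repaid)

# Treat a predicted repayment as a loan approval.
approve = model.predict(X)
print("Approval rate, majority-area group:", approve[group == 0].mean())
print("Approval rate, minority-area group:", approve[group == 1].mean())
# The gap arises purely from the proxy correlation in the training data.
```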

The lack of transparency and accountability is a particular problem for neural models, owing to the unanalyzable relationship between input and output. For medical applications, it is of particular importance that, in the case of a diagnosis for example, not only the outcome but also the chain of causes and conclusions leading to it is made available to experts. If the reasons for the model’s decisions are not known, the question of responsibility for erroneous decisions remains unclear.

Compliance with legal and regulatory requirements can be difficult when using black box AI systems. Some regulations, such as the GDPR, guarantee individuals the right to an explanation and require transparency in automated decision-making. If the inner workings of an AI system cannot be interpreted or explained, however, compliance with such requirements may simply be impossible.

Targeting the challenges of this rapidly evolving technology, the forthcoming European regulation focuses on classifying AI-based solutions into risk categories and introducing restrictions on their use according to those categories. In an era of data-driven economy, where the main value is the data collected from users, a looser regulatory environment is a competitive advantage, even if the associated moral and ethical concerns and the social harms of irresponsible use of technology may negate these initial benefits in the long run. The key issue is to find a balance in which the regulatory environment makes large companies far more accountable than at present for the consequences of the technologies they use and the data collected for them, while supporting responsibly deployed AI-based solutions that increase competitiveness and harness the real benefits of such technologies. Given that AI is perhaps the fastest evolving field today, this distinction may become harder to draw with each new technical advance.


István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
