
Making the Invisible Visible: Explainable AI in Everyday Life (Part I.)
Artificial Intelligence (AI) is no longer just a forward-looking technology. It has become an integral part of our daily lives. It influences decisions, automates processes, and in many cases, makes choices on our behalf. Think of situations where an algorithm evaluates our credit application, screens our job application, or suggests medical treatment. These systems are often based on neural networks or deep learning models, which are frequently referred to as “black-box” models due to their complexity. The term highlights the fact that the path leading to a given outcome is not transparent to observers: while we may know the inputs and outputs, the logic behind the decision remains hidden. This is not only a theoretical issue but also a practical and social concern, especially when decisions have a direct impact on people’s lives.
It is not enough for algorithms to produce accurate results. It is equally important to understand how and why these systems reach their decisions. This is the goal of Explainable Artificial Intelligence (XAI), which includes a variety of techniques and methods designed to make the internal workings of AI models more understandable. The aim is not just to create intelligent systems, but ones whose decisions are also transparent and interpretable. In other words, people should be able to follow and understand how a particular outcome was produced. This is not just a technical challenge but a fundamental social need, since trust in AI systems depends largely on how well their decisions can be explained.
The need for XAI is becoming more than just a technical or usability issue. It is increasingly seen as a legal, ethical, and societal requirement. This is especially true in areas where AI decisions affect individuals’ lives, rights, or opportunities. The European Union emphasizes the importance of transparency, particularly for high-risk AI systems. The AI Act requires such systems to operate in a documented, interpretable, and verifiable manner. However, this does not mean that technical XAI tools must be used. The focus is rather on ensuring that these systems can be audited and reproduced, and that their mechanisms are understandable to those who have the right to know how they work. Article 22 of the GDPR also addresses automated decision-making and states that individuals must be informed clearly and intelligibly about how such systems operate and what the consequences of their decisions might be. Although it does not explicitly mention a “right to explanation,” the intention is similar: to ensure that people are not left at the mercy of opaque automated processes.
Although regulations like the GDPR and the AI Act outline transparency and disclosure requirements, these are not always technically detailed or clearly defined. They tend to state principles and objectives rather than prescribe specific tools. In this context, XAI can help meet these goals, even if it is not legally mandated. Ethical and societal aspects are just as important. For example, if someone is unfairly denied a loan or receives an incorrect medical recommendation from an algorithm, they have a legitimate expectation to understand why the decision was made. This matters not only for legal certainty but also for maintaining public trust in AI systems. Unsurprisingly, users tend to trust systems less when they operate in an opaque manner.
From a developer’s point of view, explainability helps fine-tune systems and identify errors or biases. It provides insight into how specific inputs influence outcomes and allows for objective evaluation of the system, whether in internal development or external audits.
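To make this concrete, the sketch below probes a model with permutation importance, one common way of measuring how strongly each input drives predictions. The scikit-learn library, the random-forest model, and the synthetic loan data are illustrative assumptions, not part of any real system:

```python
# Minimal sketch: probing which inputs a model relies on via permutation
# importance. The loan data below is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.integers(18, 70, n),         # age
    rng.integers(300, 850, n),       # credit score
])
# Synthetic target: approval driven mostly by income and credit score.
y = ((X[:, 0] > 45_000) & (X[:, 2] > 600)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "age", "credit_score"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A pattern like this, run during development or an audit, quickly shows whether the model depends on the features it is supposed to depend on, or on something unexpected.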
As AI has advanced, models have grown dramatically in size. Modern systems, especially deep neural networks, often have hundreds of billions of parameters. With this level of complexity, it is nearly impossible to manually trace all decision paths. Earlier models, such as BERT, were simpler in architecture, with a limited number of layers and shorter input lengths. This made it easier to analyze components like attention mechanisms and understand how decisions were made. In contrast, today’s large language models (LLMs), like GPT-4 or Claude, operate with many more parameters and handle much longer contexts. These systems can exhibit emergent behaviors, meaning they may display patterns that are not directly predictable from their architecture. This makes their decisions even harder to interpret and increases the demand for sophisticated XAI tools capable of handling such complexity.
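For illustration, the following sketch shows how the attention weights of a BERT model can be pulled out for inspection; it assumes the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, and it only exposes raw weights rather than a full analysis. With closed, much larger LLMs, outside observers typically have no comparable access:

```python
# Minimal sketch: pulling attention weights out of a BERT model for inspection.
# Assumes the Hugging Face transformers library and the public
# bert-base-uncased checkpoint; this exposes raw weights only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The loan application was rejected.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One attention tensor per layer, shaped (batch, heads, tokens, tokens).
print(f"layers: {len(outputs.attentions)}")                 # 12 for bert-base
print(f"one layer's shape: {tuple(outputs.attentions[0].shape)}")

# Average attention each token receives in the first layer, across all heads.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
received = outputs.attentions[0].mean(dim=1)[0].sum(dim=0)  # one value per token
for token, weight in zip(tokens, received.tolist()):
    print(f"{token:>12}: {weight:.3f}")
```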
A 2019 study illustrates this challenge. A healthcare decision-support system was found to be biased based on gender: the model had been trained on data that reflected existing social inequalities, and those biases became embedded in the system’s behavior.
This highlights an important issue: when we question an algorithm’s decision, we often receive no satisfactory explanation in return. Developers or service providers frequently cite business confidentiality or the technical complexity of the system as reasons for their lack of transparency. However, this kind of opacity does more than simply erode trust in the long run. It can also have serious consequences at the societal level, particularly when algorithms sustain or even amplify existing inequalities in ways that users are unaware of.
One of the practical challenges of explainability is that people have different understandings of what a “clear explanation” means. An end user typically needs simple, visual, and easy-to-understand feedback. A decision-maker, on the other hand, requires detailed, documented, and traceable reasoning. Developers naturally rely on deeper, technical-level analyses as a starting point.
Explanations can also differ in scope. We can distinguish between global and local approaches. A global explanation describes how the model operates in general, for instance, whether it tends to prioritize income over age in loan evaluations. A local explanation focuses on a specific decision, shedding light on why a particular outcome was reached in an individual case. Both types are important, though they serve different purposes. Global explanations are useful for regulation and system auditing, while local explanations promote transparency and fairness in individual decisions.
In the next part, we will introduce a concrete method that addresses these challenges. SHAP (SHapley Additive exPlanations) is a tool that allows us to interpret, clearly and intelligibly, how a model arrived at a specific decision. It is not only a technical innovation but also one of the most practical and widely applicable tools in the field of explainable artificial intelligence.
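As a small preview of what that looks like in practice, the sketch below contrasts the two explanation scopes described above using SHAP values. The shap and scikit-learn packages, the random-forest model, and the synthetic loan data are illustrative assumptions rather than a description of any real system:

```python
# Minimal sketch: global vs. local explanations with SHAP values.
# Assumes the shap and scikit-learn packages; the loan data and the
# "approval score" target are synthetic and purely illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "age": rng.integers(18, 70, n),
    "credit_score": rng.integers(300, 850, n),
})
# Synthetic "approval score" driven mainly by income and credit score.
y = 0.6 * (X["income"] / 100_000) + 0.3 * (X["credit_score"] / 850) + rng.normal(0, 0.02, n)

model = RandomForestRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape: (applicants, features)

# Global explanation: average impact of each feature across all applications.
global_importance = np.abs(shap_values).mean(axis=0)
print(dict(zip(X.columns, global_importance.round(3))))

# Local explanation: why did applicant #0 receive the score they did?
print(dict(zip(X.columns, shap_values[0].round(3))))
```

In this toy setup, income and credit_score should dominate the global ranking, while the local values show how those same features pushed one particular applicant’s score up or down.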
István ÜVEGES, PhD is a Computational Linguist researcher and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of Legal Language (legalese), the Plain Language Movement, and sentiment and emotion analysis.