Artificial Intelligence and Fundamental Rights: A Multidimensional Legal Perspective is Needed

The implementation of Artificial Intelligence in managerial contexts represents a significant transformation that raises deeply complex questions from a contemporary legal and constitutional perspective. Although technology is frequently presented as a neutral tool for productive optimization and organizational efficiency, contemporary normative analysis reveals that AI systems interact decisively with power structures, organizational hierarchies, and pre-existing economic relationships, producing differentiated and often unequal effects on the working population. This phenomenon requires a coherent and multidimensional legal framework that simultaneously addresses fundamental rights, the principles of good administration – in light of the evolution of European regulation on algorithmic governance – and the protection of workers’ rights in the contemporary digital context.

The Issue of Technological Neutrality: Progress Is Not Equal for Everyone (?)

The technological neutrality problem constitutes the first critical node of contemporary legal analysis. Artificial intelligence technologies do not operate in a regulatory or social vacuum; they are incorporated into contexts traversed by asymmetrical power relations that significantly condition their concrete functioning and their real effects on people. The legal principle of technological neutrality, traditionally invoked in matters of technological regulation, proves deeply inadequate for understanding the contemporary phenomenon of decision-making automation. This principle implicitly assumes that the introduction of new tools does not modify the underlying structures and pre-existing power dynamics. Consider, for example, the employment relationship between employee and employer, or the relationship between those who deliver the news and those who receive it.

On the contrary, it has been empirically and legally demonstrated that algorithmic systems replicate and amplify forms of discrimination already present in the system, with considerable multiplicative effects. When the data used to train algorithms comes from historical processes characterized by systemic discrimination (the Loomis case in the United States represents the quintessential example of human bias being replicated by machines), algorithms, if not skillfully managed, perpetuate such discrimination on a mass scale, applying the same flawed logic to thousands of decisions simultaneously. This phenomenon has been documented in multiple areas of technology application, from content moderation processes in the digital sphere (such as social networks) to the evaluation of work performance (particularly regarding women), extending to jurisdictional areas where personal liberty is at stake.
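The scaling mechanism described above can be made concrete with a minimal, purely illustrative Python sketch. The data and the naive "model" are invented for illustration (they are not drawn from the Loomis case or any real dataset): a system that simply learns the majority outcome per group from a biased historical record will reproduce the historical disparity wholesale when applied to thousands of new, equally situated applicants.

```python
from collections import defaultdict

# Hypothetical historical decisions: group "A" was approved far more often
# than group "B" for equally qualified candidates (a biased record).
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

# A naive "model": learn the majority outcome per group from the record.
counts = defaultdict(lambda: [0, 0])  # group -> [rejections, approvals]
for group, outcome in history:
    counts[group][outcome] += 1
model = {g: int(c[1] > c[0]) for g, c in counts.items()}

# Applied to thousands of new, equally qualified applicants, the model
# does not correct the disparity -- it applies it at scale.
new_applicants = ["A"] * 5000 + ["B"] * 5000
decisions = [model[g] for g in new_applicants]
rate_a = sum(d for g, d in zip(new_applicants, decisions) if g == "A") / 5000
rate_b = sum(d for g, d in zip(new_applicants, decisions) if g == "B") / 5000
```

In this toy setting every "A" applicant is approved and every "B" applicant is rejected: the single biased rule, once automated, governs ten thousand decisions at once, which is precisely the multiplicative effect the legal analysis warns about.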

Empirical and normative research also reveals that the distribution of impacts derived from technological implementation is not equitable from a social and geographical point of view, constituting a fundamental issue of distributive justice. The effects of AI systems strike differently depending on the social, economic, gender, and geographical positions of those involved. Yet the end result is uniform in one respect: the machine lacks the capacity to grasp the complexity of the social reality on which it operates.

Returning to the labor sphere, for example, workers endowed with bargaining power and organizational control can use technology strategically to improve their working conditions and personal productivity, while precarious, subordinate workers lacking decision-making power endure it through intensified surveillance and increased daily performance pressure.

This understanding allows us to recognize how the same technology, without the correctives typical of pluralist-social legal systems, can produce advantages for some groups of workers while aggravating conditions of vulnerability for others.

The Principle of Good Administration

This view of critical issues through an intersectional lens assumes crucial relevance in light of the fundamental principle of good administration, enshrined in Article 41 of the Charter of Fundamental Rights of the European Union and recognized by the consolidated jurisprudence of the Court of Justice as a general principle of Union law.

In this regard, jurisprudence has consistently recognized that the principle of good administration imposes binding obligations regarding transparency, impartiality, rationality, and accountability of the administrative decision-making process in all its forms. When organizations introduce Artificial Intelligence systems without a thorough preventive assessment of real needs, without meaningful consultation with those affected, and without a methodical and planned implementation process, they manifestly violate the fundamental prerequisites of proper administration. The common practice of acquiring mass software licenses without a serious preliminary cost-benefit assessment stands in stark contrast to the requirement of administrative rationality and represents a form of structural irrationality in the organizational decision-making process.

Such conduct also violates the principle of proportionality, as the massive implementation of a tool without preventive analysis upsets the required balance between the objective pursued and the instrument employed.

Protection of Workers’ Fundamental Rights

The protection of workers’ fundamental rights represents another pillar of contemporary legal analysis on the regulation of Artificial Intelligence. The introduction of AI systems in work contexts produces significant implications for the right to rest, the limitation of working hours, the protection of private and family life, and safeguarding against constant and pervasive surveillance and control that aligns poorly with liberal principles. The Charter of Fundamental Rights of the European Union guarantees, in Articles 3, 7, and 8, the right to the integrity of the person, the right to respect for private and family life, and the fundamental right to the protection of personal data.

These fundamental principles find real expression in the employment context through European legislation on health and safety in the workplace, particularly Framework Directive 89/391/EEC, which represents an important, binding instrument oriented towards the effective protection of workers. This Directive, already more than thirty years ago, attributed to employers the concrete and non-delegable obligation to guarantee the safety and health of workers with respect to all significant occupational risks, including those of a psychosocial nature and organizational stress.

The use of algorithmic management systems that intensify the overall workload and multiply time-consuming micro-tasks increases daily performance pressure and significantly reduces decision-making autonomy, generating actual and scientifically documented psychosocial risks. These include chronic work-related stress, professional burnout syndrome, occupational depression, and sleep quality alterations. Occupational, organizational, and sociological studies document that the introduction of these systems does not effectively reduce overall workloads but significantly intensifies them, redistributing obligations towards precarious and subordinate workers.

Fighting Algorithmic Discrimination

The issue of non-discrimination assumes a particularly complex dimension through the theoretical lens of contemporary intersectional analysis. European anti-discrimination legislation, while providing important protections in consolidated positive law, reveals itself in practice to be inadequate in effectively regulating algorithmic discrimination. First, algorithmic discrimination can operate indirectly, relying on proxy data and inferred attributes that fall outside the category of strictly protected data yet correlate strongly with protected characteristics such as ethnicity, gender, sexual orientation, or religious affiliation.

In addition, the structurally opaque nature of modern algorithms – the so-called “black box problem” – makes it extraordinarily difficult to identify, document, and prove the causal link between the algorithm’s functioning and the concrete discriminatory effect produced. People subjected to discriminatory algorithmic decisions, a difficulty often compounded by the limits that copyright places on scrutiny of licensed algorithms, face almost insurmountable procedural obstacles in proving and stopping the discrimination. In response to this fundamental procedural challenge, the European regulatory framework has responded as best it could: for example, the right to explanation in automated decisions has been recognized as a fundamental protection mechanism and a concrete tool of procedural democracy. Article 22 of the GDPR grants data subjects the right not to be subject to decisions based solely on automated processing which produce legal effects concerning them or similarly significantly affect them. Similarly, Article 15(1)(h) of the GDPR guarantees the right to obtain “meaningful information about the logic involved” in automated processing.

The recent AI Act has also moved in this direction, imposing strict requirements on those systems classified as “high risk”, which touch on inviolable dimensions of the human being, such as personal freedom, dignity, and physical integrity.

Furthermore, European jurisprudence has also intervened on these themes: the Court of Justice of the European Union, in particular with the Dun & Bradstreet v. Austria ruling of February 2025, clarified that information relating to automated decision-making must be “concise, transparent, intelligible and easily accessible”.

In this regard, however, attention must be paid to the developments introduced by the Digital Omnibus, to understand whether the policy of ‘de-bureaucratization’ will lead to a reduction in protection. The next few years will be crucial in determining where the EU is headed on this matter.

Conclusions: Towards an Integrated Regulatory Approach

Given the dizzying speed of technological development and the persistent indecision over how to resolve these problems, the question of the fair distribution of the benefits of automation remains a critical social, economic, and regulatory challenge for contemporary European legal systems.

The risk is that the well-being generated by technology will be counterbalanced by new obligations, new abuses, and new injustices. It remains to be seen whether States have the will to confront the private powers that manage these tools and to chart new, rights-guaranteeing paths by which to safeguard their citizens.

In conclusion, the implementation of artificial intelligence requires a vision reminiscent of Homer’s Ulysses: of multiform resourcefulness and countless qualities. Moving through the themes of digital sovereignty, the non-neutrality of algorithms, and the effective protection of fundamental rights, the need for oversight and for a plurality of perspectives becomes clear. Only through a shared approach open to various points of view – both national and of the social realities within them – can we imagine building a sustainable model for the use of automated tools in a democratic-libertarian perspective.

In summary, to make the message even clearer, it may be useful to reformulate the important principle of “keep the human in the loop” into a new principle: “keep humanity in the loop”.


Matteo Paolanti is a PhD candidate in Comparative Constitutional Law at the University of Siena’s Department of Law, where he began his doctoral studies in 2023. His research centres on pressing intersections of law and technology, particularly the constitutional implications of artificial intelligence, digital rights, and regulatory challenges in emerging tech governance.

Paolanti’s academic trajectory reflects a strong and sustained commitment to constitutional scholarship. Affiliated with the Law Department of the University of Siena, he has, in recent years, authored numerous research articles published in leading peer-reviewed journals within Italian academia. Actively engaged in academic networks, he has participated in conferences and scholarly events both in Italy and across Europe, including at the University of Vienna and the University of Vilnius.

In the spring of 2025, he spent two months in Heidelberg conducting research for his final dissertation at the Max Planck Institute for Comparative Public Law and International Law. The core objective of this research stay was to deepen his understanding of the comparative method by engaging with it from a broader and more international perspective, within one of the most prestigious research environments in the field.

Beyond his institutional research activities, Paolanti contributes to academic debate through targeted writings, such as blog posts and commentaries, that connect constitutional theory with pressing socio-political challenges. His work explores issues such as the contemporary crisis of democracy, digital literacy, and the development of the digital revolution in a human-centric and rights-oriented framework.