More Than Half the Internet Is Machines – What Automated Traffic Means for Credibility and Public Discourse (Part II.)
From this perspective, it becomes clearer why so-called inauthentic use appears as a distinct risk category in the European Union’s Digital Services Act. Fake accounts, automated or partially automated behaviors, and artificially amplified distribution patterns are not singled out because…
Artificial Intelligence and Fundamental Rights: A Multidimensional Legal Perspective Is Needed
The implementation of Artificial Intelligence in managerial contexts represents a significant transformation that raises deeply complex questions from a contemporary legal and constitutional perspective. Although the technology is frequently presented as a neutral tool for productive optimization and increasing organizational efficiency…
More Than Half the Internet Is Machines – What Automated Traffic Means for Credibility and Public Discourse (Part I.)
When a majority of online activity is produced by automated systems, the question is no longer whether bots exist, but what kind of internet we are actually using. The “dead internet” idea captures this unease, even if its more extreme…
AI With Humans or AI Without Humans: Where Does the Deloitte Model Fail? – Part II.
The Deloitte incident (see Part I.) shows that the real risk of generative AI in professional environments is not the technology itself, but the institutional conditions under which it is deployed. AI does not eliminate responsibility—it merely reshapes where responsibility…
AI With Humans or AI Without Humans: Where Does the Deloitte Model Fail? – Part I.
Expensive, government-commissioned reports built on fabricated studies may sound at first like a tabloid-style “AI scandal,” but the story runs much deeper. The recent Canadian and Australian Deloitte cases—involving uncontrolled Gen AI use without validation—do not simply reveal technical errors. They…
“Blind” Models, Invisible Biases: The Limits of Algorithmic Fairness
Modern machine learning systems have become part of our social infrastructure, which means that the biases they transmit are not just technical glitches but real legal and ethical risks. In practice, bias often persists even when protected attributes are formally…
AI Act vs. reality: technical obstacles to compliance and the possible workarounds – Part II.
Fundamental-rights compliance: everyone wants something different. A persistent tension emerges between industry actors and civil rights groups. Industry argues that it is unclear why it is held to detailed human-rights compliance, especially on anti-discrimination, while other actors with comparable risk face…
AI Act vs. reality: technical obstacles to compliance and the possible workarounds – Part I.
The EU’s AI Act promises a unified framework for trustworthy AI, yet day-to-day implementation is already colliding with overlapping rules, uneven obligations and missing standards. Interviews and recent analyses show the biggest friction points: a one-size-fits-all approach, transparency gaps for…
Not Neutral – Detecting Political Tilt in Large Language Models
Large Language Models are fast becoming the mediators between information sources and users, so their value signals matter. A recent peer-reviewed study in Nature’s Humanities and Social Sciences Communications finds a measurable rightward shift in models’ political placement across versions…
From Text to Action: World Models in Practice (Part II.)
In practical applications and in the context of World Models (WMs), industrial robotics is usually mentioned first, and for good reasons. A robot that relies on a world model does not merely repeat a pre-programmed sequence of movements. It notices…