Between Child Protection and Mass Surveillance: The EU’s Chat Control Debate
Since 2022, the European Union has been preparing a regulation intended to strengthen the protection of children online, even at the cost of digital privacy. Under the proposal, every private message, not only on mobile phones but on any digital device or platform where private messaging takes place, would be scanned by machine learning algorithms before it is even sent. From the very beginning, the debate has revolved around a single question: can we, or should we, sacrifice digital privacy on the altar of online safety?
In the autumn of 2025, a conflict rooted in technological development and digitalization once again came into the spotlight in the EU. The renewed controversy concerned how to protect children from online abuse while at the same time safeguarding fundamental rights, above all digital privacy and the possibility of encrypted communication, which is a core element of that privacy. The situation closely resembles the challenges of moderating content on social media platforms, where the recurring question is how regulation can take place without disproportionately restricting freedom of expression. The same dilemma arises here as well: in the name of a legally legitimate aim, how far should the state be allowed to go in interfering with digital communication? The debate surrounding the proposal known as “Chat Control” (formally the CSA Regulation or CSAR, the Regulation to Prevent and Combat Child Sexual Abuse) is rooted precisely in this tension.
The aim of the proposal is to enable a more effective response to the online sexual exploitation of children. According to the European Commission, the operators of online communication platforms and messaging applications are currently not doing enough to detect sexual abuse taking place in the digital environment, especially when such activity occurs through end-to-end encrypted (E2EE) systems. The draft seeks to address this shortcoming by introducing so-called client-side scanning, which would examine messages on the user’s device, with automated tools, before encryption takes place. In practice, providers would build dedicated software modules into their mobile or desktop applications. These modules would operate on the device itself, either at the level of the operating system or within the application’s own framework, automatically comparing outgoing media files against hash databases or patterns learned by pre-trained Artificial Intelligence models. Hash-based technologies, such as Microsoft PhotoDNA or Google CSAI Match, make it possible to identify known abusive material without explicitly reanalyzing its visual content, while machine learning models can also flag unknown or modified material that could be harmful. This, however, is not only an advantage: more sensitive filtering also means a higher risk of false positives, which could subject the content of innocent users to unnecessary scrutiny. Although the current version covers only images, videos, and URLs, it could later be extended to text and voice messages as well.
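To make the mechanism concrete, the sketch below shows, in simplified form, how on-device hash matching of this kind could work in principle. It is not the technology mandated by the CSAR (which does not prescribe a specific implementation) and it does not reproduce PhotoDNA or CSAI Match; it uses a plain average hash, an assumed database of known hashes, and an assumed Hamming-distance threshold as stand-ins for a proprietary perceptual-hashing system.

```python
# Minimal sketch of client-side hash matching. NOT the CSAR-mandated design
# and NOT PhotoDNA/CSAI Match: the average hash, the threshold, and the
# known-hash database below are illustrative assumptions only.

from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit hash
MATCH_THRESHOLD = 5    # max Hamming distance counted as a "match" (assumed)

def average_hash(path: str) -> int:
    """Compute a 64-bit average hash: pixels brighter than the mean -> 1."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def scan_before_send(path: str, known_hashes: set[int]) -> bool:
    """Return True if the outgoing image resembles any known-abuse hash.
    In a client-side-scanning design, this check runs on the device,
    before the message is handed to the end-to-end encryption layer."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= MATCH_THRESHOLD for k in known_hashes)
```

The point of the sketch is structural rather than technical: the comparison runs on the plaintext image on the user’s device, before the message ever reaches the encryption layer, which is precisely the step that critics of the proposal object to.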
Such a regulation would have a drastic impact on the operation of digital service providers. Platforms like WhatsApp, Signal, or Telegram would be required to integrate the filtering systems mandated by the CSAR into their applications, and failure to comply could even result in their exclusion from the EU digital market. Yet the mandatory introduction of such software backdoors fundamentally calls into question the credibility and integrity of end-to-end encryption, since messages would be analyzed by a system before they ever undergo the encryption process.
Critics agree that child protection is an important goal, but they argue that the proposed technical solution would cause an excessively severe infringement of digital privacy. Algorithmic filtering presupposes that all user messages, regardless of whether they are problematic or not, would be automatically scanned. According to some experts, this kind of mass, preventive content filtering amounts to an algorithmic surveillance system that conflicts with the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.
More than 300 international researchers and data protection experts have already warned in an open letter that the proposal could weaken digital encryption and restrict civil liberties. The signatories found it particularly concerning that such filtering systems are not sufficiently reliable and often generate false positives, which can place the digital content of innocent users under unnecessary scrutiny.
The European Data Protection Supervisor (EDPS) has also voiced concerns about the regulation. According to the office, the proposal in its current form is not consistent with EU data protection law, particularly the GDPR, which imposes strict requirements on proportionality and purpose limitation in data processing. Similarly, the European Data Protection Board (EDPB) emphasized that child protection objectives cannot override the safeguarding of digital privacy and urged a reconsideration of the proposal.
Political support for the regulation currently appears to be deeply divided. Several EU member states, including Spain, France, and Hungary, support the proposal, while others, such as Germany, Austria, and the Netherlands, have taken a strong stance against it. Germany’s rejection may prove decisive, as the country, together with other opposing member states, could form a blocking minority in the Council, effectively preventing the regulation from being adopted.
The reaction from citizens and civil society organizations has also been intense. Several international digital rights groups, such as European Digital Rights (EDRi), Global Freedom of Expression, and Access Now, argue that if adopted, the proposal would make systematic monitoring of private messaging lawful in the European Union for the first time. This could set a precedent for other countries as well, particularly in systems where privacy protection is already on shaky ground.
According to critics, the introduction of the regulation would be extremely risky not only from a data protection perspective but also from a technical one. Image recognition systems based on artificial intelligence are still not capable of reliably distinguishing between illegal and harmless content. As a result, there is a high likelihood that innocent material will also be flagged, while some genuinely harmful content may remain hidden. In addition, implementing such a system would involve significant technical challenges and costs, especially for smaller market players who lack the necessary infrastructure.
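A back-of-the-envelope calculation shows why false positives matter at this scale. All figures below are illustrative assumptions, not estimates from the proposal or from any provider; the point is only that when every message is scanned, even a seemingly low error rate produces enormous absolute numbers.

```python
# Illustrative base-rate arithmetic. Every number here is an assumption,
# not a figure from the CSAR proposal or from any messaging provider.
messages_per_day = 10_000_000_000   # assumed EU-wide volume of scanned items
false_positive_rate = 0.001         # assumed: 0.1% of benign items wrongly flagged
prevalence = 0.000001               # assumed share of items that are actually abusive
true_positive_rate = 0.9            # assumed detector sensitivity

benign = messages_per_day * (1 - prevalence)
abusive = messages_per_day * prevalence

false_alarms = benign * false_positive_rate
true_hits = abusive * true_positive_rate

precision = true_hits / (true_hits + false_alarms)
print(f"false alarms/day: {false_alarms:,.0f}")
print(f"true detections/day: {true_hits:,.0f}")
print(f"share of flags that are actually abusive: {precision:.2%}")
```

Under these assumed numbers, the system would flag roughly ten million harmless items per day against only a few thousand genuine detections, so the overwhelming majority of flagged content would belong to innocent users.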
Moreover, the historical pattern of surveillance technologies shows that they almost never appear under their real name. They are often introduced under the banner of security, but history demonstrates that once the infrastructure is in place, its use almost always extends beyond the original purpose. Weakening encryption “just a little” is not a small compromise but the beginning of a process. If we accept that our private messages can be examined on our devices before we even send them, we have effectively introduced mass surveillance, only under a different label.
Ultimately, the long-standing debate on Chat Control is about how a digital society can balance the protection of children with respect for individual rights. This dilemma is not new: defining the boundary between the common good and individual freedoms has always been one of the most difficult questions in a democratic legal system. The CSAR proposal, however, adds a new dimension to this debate. Through artificial intelligence and automated content filtering, not only lawmakers but algorithms themselves now participate in deciding what counts as acceptable digital behavior. And while the goal of protecting children is noble and justified, the chosen tools raise the question of what price we are willing to pay to achieve it.
István ÜVEGES, PhD, is a computational linguist, researcher, and developer at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences. His main interests include the social impacts of Artificial Intelligence (Machine Learning), the nature of legal language (legalese), the Plain Language Movement, and sentiment and emotion analysis.