
A Roadmap to Advancing Youth Safety in the Age of AI

While I have always been an optimist about the transformative power of technology for child safety, I must acknowledge that the potential benefits are currently outweighed by the harms people are inflicting with AI. To advance child safety online, we need a systemic overhaul of how we approach the issue, working together to share effective solutions to combat AI addiction, human loneliness and isolation, and the rapid spread of AI-generated deepfakes.

Heading Down a Dangerous Path

The creation and dissemination of non-consensual sexual AI-generated imagery, mostly using Grok, has become a serious problem, furthering gender-based violence and creating an atmosphere of fear in schools, where inadequate resources are devoted to teaching young people AI literacy or to preventing the use of nudifying apps against their peers. In 2025 alone, at least 1.2 million young people have disclosed that their images were manipulated into sexually explicit deepfakes. Because of the issue's prevalence, UNICEF released a statement declaring that AI-generated deepfake sexual content involving minors is abuse and should be treated with the same urgency as offline harm. Civil society is increasingly devoted to combating the dissemination of child sexual abuse material online, a phenomenon growing in prevalence due to easily available deepfake technology and the use of AI to groom minors online.

Laws are being enacted worldwide to combat the spread of such images. In the U.S., for example, the "Take It Down Act" was signed into law, making it a federal crime to knowingly publish non-consensual intimate images, including AI-generated deepfakes and explicit images of children. However, like many other laws governing behavior on social media, such as the EU's Digital Services Act, this Act is opposed by many who see it as restricting free speech.

The easiest solution to these problems is to restrict children's access to social media worldwide, and several European countries have recently announced legislative proposals to that effect. It is not, however, a foolproof method of protection. Children will eventually need to be introduced to the online world for their future work, and to learn the skills required, as adults, to avoid becoming addicted to chatbots, to withstand algorithmic manipulation, and to avoid other associated harms.

This goal of balancing protection with education is supported by some companies building age-prediction features that reduce sensitive content when a minor uses a chatbot. To further youth safety online, Discord announced that it is rolling out stronger default safety settings for teen users, and Roblox shared a youth-friendly guide to its community standards. Transparency, clarity, and age-appropriate communication are foundational in the long run to enhancing online child safety and facilitating digital and AI literacy for young users.
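To make the age-prediction idea concrete, here is a minimal sketch in Python of how such a gate might sit in front of a chatbot's response pipeline. The classifier, the age bands, and the thresholds are illustrative assumptions, not any company's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class AgeBand(Enum):
    LIKELY_MINOR = "likely_minor"
    UNCERTAIN = "uncertain"
    LIKELY_ADULT = "likely_adult"

@dataclass
class Prediction:
    band: AgeBand
    confidence: float  # 0.0 to 1.0

def predict_age_band(account_signals: dict) -> Prediction:
    """Hypothetical classifier: a real system would combine behavioural
    and declared-age signals. Stubbed here for illustration."""
    declared_age = account_signals.get("declared_age")
    if declared_age is not None and declared_age < 18:
        return Prediction(AgeBand.LIKELY_MINOR, 0.95)
    return Prediction(AgeBand.UNCERTAIN, 0.5)

def apply_content_policy(prediction: Prediction, default_policy: str) -> str:
    """Fail closed: anyone not confidently predicted to be an adult
    receives the restricted, minor-appropriate policy."""
    if prediction.band == AgeBand.LIKELY_ADULT and prediction.confidence >= 0.9:
        return default_policy
    return "restricted_minor_policy"

policy = apply_content_policy(predict_age_band({"declared_age": 14}), "standard_policy")
print(policy)  # -> restricted_minor_policy
```

The design choice worth noting is that the gate fails closed: uncertainty about a user's age defaults to the protective setting rather than the permissive one.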

However, we should acknowledge that some features, such as the nudification provided by Grok, have no positive societal effect of any kind, for any age group. Adults using this feature to create non-consensual sexual imagery of other adults also constitutes a violation of rights. This is not the direction in which AI should be developed. To unify the international hotlines dealing with non-consensual sexual AI-generated imagery, the "No to Nudify" campaign was recently launched.

Steps Taken to Combat AI Harms

To combat the dissemination of child sexual abuse material online, and to detect, triage, and manage harmful content, ROOST.tools introduced Coop, an open-source review and moderation tool. The ISO/IEC 27036-3 standard, also known as the Safe Framework, has also resurfaced as a vital tool for auditable trust and safety management systems.
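Tool specifics aside (the sketch below shows a generic detect-and-triage pattern, not Coop's actual interface), the core idea of such pipelines is to score incoming reports by severity and route the worst to human reviewers first:

```python
from __future__ import annotations
import heapq
from dataclasses import dataclass, field

# Illustrative severity ordering: lower number = reviewed sooner.
SEVERITY = {"csam_suspected": 0, "sexual_deepfake": 1, "harassment": 2, "spam": 3}

@dataclass(order=True)
class Report:
    priority: int
    content_id: str = field(compare=False)
    category: str = field(compare=False)

class TriageQueue:
    """Priority queue of abuse reports; most severe categories surface first."""
    def __init__(self) -> None:
        self._heap: list[Report] = []

    def submit(self, content_id: str, category: str) -> None:
        heapq.heappush(self._heap, Report(SEVERITY.get(category, 9), content_id, category))

    def next_for_review(self) -> Report | None:
        return heapq.heappop(self._heap) if self._heap else None

q = TriageQueue()
q.submit("c1", "spam")
q.submit("c2", "csam_suspected")
print(q.next_for_review().category)  # -> csam_suspected
```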

Beyond the dissemination of deepfakes, AI is currently being used as a substitute for children's human relationships: 81% of kids aged 11-16 say they use AI chatbots, and two out of three see them as a friend. This is a natural response to the loneliness children are experiencing at an alarming rate, as they seek to fulfill their emotional needs by developing positive relationships with chatbots. A 2024 study found that chatbots can indeed reduce feelings of loneliness and be used therapeutically, but they may also perpetuate harmful ideas and become addictive. If children confide in AI systems as friends, developers take on certain obligations: there is a case to be made for emotional transparency disclosures, restrictions on certain anthropomorphic design features, and guardrails against dependency reinforcement. These are no longer abstract ethics debates, but concrete consumer protection, privacy, and AI governance questions.
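As a sketch of what such obligations might look like in practice, consider a wrapper that injects a recurring transparency disclosure and caps session length. The disclosure text, turn interval, and time limit are illustrative assumptions, not any regulator's requirements.

```python
import time

DISCLOSURE = ("I'm an AI assistant, not a person. For support with difficult "
              "feelings, consider talking to a trusted adult or a professional.")

MAX_DAILY_MINUTES = 60          # assumed dependency guardrail
DISCLOSURE_INTERVAL_TURNS = 20  # re-disclose periodically, not just once

class SessionGuard:
    """Wraps chatbot replies with emotional-transparency and time-limit rules."""
    def __init__(self) -> None:
        self.turns = 0
        self.started = time.monotonic()

    def wrap_reply(self, reply: str) -> str:
        self.turns += 1
        elapsed_min = (time.monotonic() - self.started) / 60
        if elapsed_min > MAX_DAILY_MINUTES:
            # Cap the session instead of reinforcing open-ended dependency.
            return "Time for a break today. " + DISCLOSURE
        if self.turns == 1 or self.turns % DISCLOSURE_INTERVAL_TURNS == 0:
            return f"{DISCLOSURE}\n\n{reply}"
        return reply

guard = SessionGuard()
print(guard.wrap_reply("Hi! How was school today?"))  # first turn carries the disclosure
```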

The ultimate question is whether we can build systems that protect children by design rather than by patchwork response. To this end, further developing concepts such as AI governance, age assurance, digital literacy, and systemic accountability is absolutely necessary.

Microsoft recently commemorated 10 years of its Global Online Safety Survey, sharing new data on rising AI-related risks. Over the past decade, the survey has tracked exposure to scams, bullying, harassment, and harmful content. But AI introduces new patterns: synthetic media that blurs truth and fiction, AI-generated harassment at scale, and emotional reliance on conversational agents. What stands out is Microsoft's pivot toward co-design workshops with students in India and Singapore, centered on AI use. This signals something critical: young people are not just users; they are stakeholders. If AI tools are shaping youth identity, education, and relationships, then youth voices must shape AI policy and product design.

Historically, content moderation has been reactive, crisis-driven, and platform-specific. Fighting AI harms, however, requires auditable systems: documented risk assessments, structured response protocols, measurable performance metrics, and independent verification. These are critical components of online child safety, as children cannot depend on inconsistent, opaque moderation practices. Safety is becoming more technical and more collaborative: open-source collaboration can significantly lower safety barriers, and shared infrastructure is proving more useful than stand-alone solutions.
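One concrete form auditability can take is an append-only decision log whose entries are hash-chained, so an independent auditor can detect after-the-fact tampering. The schema below is an illustrative sketch, not a standardized format.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of moderation decisions.
    Each entry commits to its predecessor, so silent edits break the chain."""
    def __init__(self) -> None:
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, content_id: str, decision: str, rationale: str) -> dict:
        entry = {
            "ts": time.time(),
            "content_id": content_id,
            "decision": decision,    # e.g. "removed", "age_restricted"
            "rationale": rationale,
            "prev": self._prev_hash,
        }
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = entry_hash
        self._prev_hash = entry_hash
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails the check."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("c42", "removed", "matched deepfake-abuse policy 3.1")
print(log.verify())  # -> True
```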

The final piece of the child-safety puzzle is age verification. While it is a great solution in theory, poorly designed systems can over-collect biometric data, create centralized identity databases, and be easily bypassed. Age assurance systems must therefore be both privacy-preserving and difficult to evade, a combination that is hard to engineer.
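One common design direction, sketched below with illustrative names, is for a trusted verifier to issue a signed, minimal attestation such as "over 16," so the platform learns only the age claim and never handles identity documents or biometrics. A real deployment would use asymmetric signatures or zero-knowledge proofs, plus nonces against replay, rather than the shared secret used here for brevity.

```python
import hashlib
import hmac
import json

VERIFIER_KEY = b"demo-secret"  # illustrative only; real systems use asymmetric keys

def issue_attestation(over_16: bool) -> dict:
    """Verifier side: signs only the minimal claim, with no identity attached."""
    claim = {"over_16": over_16}
    sig = hmac.new(VERIFIER_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(att: dict) -> bool:
    """Platform side: checks the signature and the claim; learns nothing else
    about the user, so there is no biometric or identity database to leak."""
    expected = hmac.new(VERIFIER_KEY, json.dumps(att["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"]) and att["claim"]["over_16"]

print(platform_accepts(issue_attestation(True)))   # -> True
print(platform_accepts(issue_attestation(False)))  # -> False
```

The point of the design is data minimization: the platform can enforce an age gate without ever becoming a honeypot of children's identity data.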

What Comes Next

Public debate about children and technology often swings between complacency and moral panic. Articles and commentaries across media reflect a growing reckoning on children and digital life, and it is definitely a conversation we, as a society, must be having. The ultimate goal of collaboration between stakeholders, the future landscape of digital laws, and the efforts to enhance AI literacy is to build an infrastructure that effectively mitigates harms associated with AI use.

A promising shift toward this goal can be seen through the development of default protections, youth-readable governance, co-design with students, open safety tooling, auditable frameworks, global hotline coordination, clear moral positioning on synthetic abuse, and privacy-preserving age assurance methods.

Despite these clear goals appearing across the board, significant gaps remain in online child safety. Foundation AI models must integrate child-protection constraints upstream. Deepfake abuse needs effective international enforcement. Clear metrics are required to advance standardized age assurance methods. Independent audits must verify AI companies' safety claims. The core principle should be safety by design, ultimately preserving the dignity and autonomy of young users.


Mónika Mercz is a Childhood and AI Lab Research Fellow at AIChildSafety.org. She was previously a research fellow at George Mason University and a visiting researcher at The George Washington University in Washington, D.C., where her research centered on how AI can be used to protect children online. She is currently completing her PhD studies in Law and Political Sciences at the Doctoral School of the Károli Gáspár University of the Reformed Church in Hungary, where her research examines how constitutional identity manifests in essential state functions of the Member States of the European Union. A graduate of the University of Miskolc with a degree in law, she specialized as an English legal translator and holds a degree in AI and Law from the University of Lisbon. She is a founding editor of Constitutional Discourse, leading the Privacy & Data Protection column.
