AI for children

How do we ensure that AI is used for good?

This question has been debated extensively in recent years, especially after news that chips are now being implanted in humans, which has created a high level of anxiety in our society.

The answer I can offer at this moment is that AI can deliver some of its best outcomes when we use it to protect one of the most vulnerable groups: children.

But we are not fighting this battle for children alone: we are also aided by international conventions such as the Convention on the Rights of the Child and European instruments such as the Guidelines of the Committee of Ministers of the Council of Europe on child-friendly justice. And now we have a new helper in this fight against the abuse of our youth, from an unexpected place: Artificial Intelligence.

Fighting crime in our increasingly globalized world requires the rapid processing of large amounts of information. If a law enforcement agency found tens of millions of files on a suspect’s computer and had to review all of them without special tools to determine whether they contained child sexual exploitation content, the process would be extremely time-consuming and would take a heavy toll on the well-being of investigators. An artificial intelligence-based system, combined with other specialized tools, can therefore sort through this data and identify images and videos depicting child sexual abuse (with the appropriate privacy safeguards in place).
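The article does not describe how such triage works internally, but one common building block is matching files against databases of hashes of already-identified material, so that only unknown files need human or classifier review. The sketch below is purely illustrative, using plain SHA-256 from the Python standard library; real systems rely on perceptual hashes (e.g. PhotoDNA-style) and curated hash lists, and the hash values shown are placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of known material, standing in for the
# curated hash databases real agencies use (values are placeholders).
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(paths):
    """Split files into known matches and files still needing review."""
    matches, unknown = [], []
    for p in paths:
        (matches if file_sha256(p) in KNOWN_HASHES else unknown).append(p)
    return matches, unknown
```

Exact hashing only catches byte-identical copies; that is why deployed tools combine it with perceptual hashing and ML classifiers for modified or previously unseen material.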

But this is not the only kind of work that AI makes easier: the Sweetie 2.0 chatbot (a project of Terre des Hommes) is specifically designed to identify pedophiles who prey on children via webcam, and can be deployed globally. The new model was built on the experience, work instructions, and chat logs of the Sweetie 1.0 project, carried out in 2013, to simulate a fictitious 10-11-year-old girl as convincingly as possible. An important aspect of its development was devising a strategy for determining, during the conversation, whether the chat partner had malicious intentions.

Various artificial intelligence techniques, such as image analysis, object recognition, text analysis, and content generation, provide essential support for law enforcement efforts. AI-based natural language processing (NLP) tools analyze communication logs, often hours of chat, to identify patterns of child sexual abuse and potential threats to children. By automatically extracting relevant information from text, investigators can quickly spot suspicious activity, saving significant time compared to manual review. This application is key to detecting the early stages of child exploitation.

In the international arena, AI-based content generation tools are already playing a role in covert operations: text generators, image generators, and voice-synthesis technologies can create realistic profiles and interactions, helping investigators infiltrate online spaces where children are being exploited. This capability increases the effectiveness of covert operations and gives law enforcement valuable insight into potential threats against children.
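To make the chat-log analysis concrete, here is a deliberately simplified sketch of pattern-based flagging. The patterns and labels below are invented for illustration; production tools such as those described above use trained classifiers over far richer linguistic features, not a fixed keyword list.

```python
import re

# Illustrative grooming-risk indicators (hypothetical examples only).
RISK_PATTERNS = [
    (re.compile(r"\bhow old are you\b", re.I), "age probing"),
    (re.compile(r"\b(don't|do not) tell (your )?(mom|dad|parents)\b", re.I),
     "secrecy request"),
    (re.compile(r"\bsend (me )?(a )?(pic|photo|picture)\b", re.I),
     "image solicitation"),
]

def flag_messages(messages):
    """Return (index, message, label) for each message that matches
    a risk pattern, so an investigator can jump straight to it."""
    flags = []
    for i, msg in enumerate(messages):
        for pattern, label in RISK_PATTERNS:
            if pattern.search(msg):
                flags.append((i, msg, label))
    return flags
```

Even this toy version shows the time-saving mechanism the article describes: instead of reading hours of chat, a reviewer starts from a short, prioritized list of flagged lines.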

There are many more tools developed for similar purposes, and their use is now essential, as the growing volume of child sexual exploitation and abuse material has overwhelmed law enforcement agencies worldwide. The scale of the problem is illustrated by the number of reports received by the US National Center for Missing and Exploited Children (NCMEC): over 32 million in 2022, in stark contrast to the approximately 100,000 received in 2010.

To address this crisis, the United Nations Interregional Crime and Justice Research Institute (UNICRI), through its Centre for Artificial Intelligence and Robotics, and the Ministry of Interior of the United Arab Emirates (UAE) launched the groundbreaking AI for Safer Children initiative, which facilitates access to 81 different AI tools for law enforcement officials in 106 countries. The initiative aims to help law enforcement agencies realize the potential of AI and has designed the AI for Safer Children Global Hub, a unique centralized and secure platform for law enforcement agencies. The Global Hub contains extensive information on AI tools that can be used to combat child sexual exploitation and abuse, and provides guidance on how law enforcement agencies can use them in an ethical and human rights-compliant manner.

The AI for Safer Children initiative goes beyond facilitating access to AI tools; it also actively builds the capacity of law enforcement agencies through dedicated training programmes. These tailored courses, launched in May 2023, have assisted more than 400 investigators from 20 jurisdictions in 20 countries, including Singapore, Ukraine, the United Arab Emirates, the United Kingdom, and the Caribbean. By imparting essential skills and knowledge, the sessions enable law enforcement professionals to integrate AI skillfully into investigative workflows, ensuring the effective and responsible application of cutting-edge technology in the fight against crimes against children.

What’s more, the Ask Save the Children chatbot is a testament to what can be achieved when innovation and humanity’s noblest aspirations come together. Unlike generic AI platforms such as AskBing or ChatGPT, Ask Save the Children is uniquely tailored to the field of child protection: it is based on specific research in this area and, once completed, will be able to provide immediate answers to questions about children’s rights and different life situations.

Artificial intelligence tools used to date have significantly reduced the case backlog of the North Florida Internet Crimes Against Children Task Force, from over 1.5 years to 4-6 months.

However, the case for using AI doesn’t stop there: in India, where 174 children go missing every day, a Python GUI application has been developed to help find them, which police can use to open new cases. The back end processes the image of a given missing person and extracts the relevant information, which is stored in a database along with additional details such as the child’s name, parents’ names, and age. Beyond this police-facing investigative tool, an Android application is also being developed for the general public. The app will use a machine learning algorithm to compare user-submitted photos with those uploaded by the police, so that missing persons can be found as efficiently as possible. Any matches can be displayed, along with the last known location of the missing person.
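The article does not specify the matching algorithm, but photo-matching systems of this kind typically convert each face image into a numeric embedding vector and compare embeddings by similarity. The sketch below assumes the embeddings already exist (produced by some hypothetical face-recognition model) and shows only the comparison step, with an invented case database and threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_embedding, case_database, threshold=0.9):
    """Return (case_id, similarity) for the closest registered case,
    or None if no case clears the similarity threshold."""
    best = None
    for case_id, embedding in case_database.items():
        sim = cosine_similarity(query_embedding, embedding)
        if best is None or sim > best[1]:
            best = (case_id, sim)
    return best if best and best[1] >= threshold else None
```

The threshold is the key design choice in such a public-facing app: set too low, it floods police with false matches; set too high, it misses genuine sightings.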

With these new possibilities open to us, we can safely say that AI can be governed and used ethically. Now we just face the daunting task of ensuring that all legal protections are in place and that cross-border cooperation is possible… which might be the biggest challenge in using AI for the benefit of humanity.


Mónika Mercz, JD, specializes in English legal translation and is a Junior Researcher at the Public Law Center of the Mathias Corvinus Collegium Foundation in Budapest, while completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Her past and present research focuses on constitutional identity in EU Member States, with specific focus on essential state functions, the data protection aspects of DNA testing, environmental protection, children’s rights, and Artificial Intelligence.
