
Children’s online privacy to the forefront!

We often write on this blog about the future of the EU, the age of AI, and ways to guard our personal data. The common thread running through these discussions is the desire to ensure a safer future. And what better way forward is there than protecting the guarantee of that future: our children?

AI is a powerful tool that can be used in the field of child protection. However, it also poses certain dangers, in particular the privacy concerns surrounding children's data used to train AI systems. Children's personal information must be protected to the highest possible standard, as they are unable to make fully informed decisions owing to their age and limited life experience.

Several international instruments govern the protection of children's data. Art. 8 of the GDPR in particular establishes age thresholds that determine what type of consent is necessary to process a minor's personal data. In relation to information society services offered directly to a child, processing is lawful where the child is at least 16 years old; EU Member States may provide by law for a lower age for those purposes, provided it is not below 13 years. A key point in this framework is that where the child is below the applicable age, processing is lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility over the child.

In the U.S., the Children's Online Privacy Protection Act (COPPA) is the piece of legislation that governs the collection and use of the data of children under the age of 13. The Federal Trade Commission has released a notice of proposed rulemaking to amend the COPPA Rule. The proposed changes are particularly important to consider in the context of the expanded use of AI in child-directed products and services. The proposal would further limit companies' ability to condition access to services on the monetization of children's data. Suggested measures include a separate opt-in requirement for targeted advertising, a prohibition on conditioning a child's participation on the collection of personal information, limits on nudging kids to stay online, and several further restrictions on data retention.

There are also measures in the EU that aim to ban AI systems posing a risk to children's safety and to safeguard their physical, mental and emotional well-being while they are online.

These new developments are crucial and will prove highly influential in time. They keep one goal in front of us: bringing children's online privacy to the forefront of our legislative efforts.

Mónika Mercz, JD, specializes in English legal translation. She is a Junior Researcher at the Public Law Center of Mathias Corvinus Collegium Foundation in Budapest, and is completing a PhD in Law and Political Sciences at the Károli Gáspár University of the Reformed Church in Budapest, Hungary. Mónika's past and present research focuses on constitutional identity in EU Member States, with particular attention to essential state functions, the data protection aspects of DNA testing, environmental protection, children's rights and Artificial Intelligence.
