Just unleash your art, we will guarantee your safety! – generative AI’s data protection concerns
The use of generative AI has brought with it several challenges alongside possible advancements. This type of artificial intelligence can produce various kinds of content, including text, imagery, audio and synthetic data. Some are concerned about intellectual property issues and the varied possible impacts on human creativity. Artists can use this tool and win prizes, but there is also a growing fear that AI may take over the jobs of human artists, effectively replacing our rich forms of self-expression. Generative AI can improve movie dubbing and educational content, while also creating problems related to deepfakes.
It starts with a prompt given by a human user, which could be text, an image or any other input that the AI system can process; various AI algorithms then return new content in response to the prompt. This content is often hard to distinguish from art, music or other products made by humans.
All of these advancements in technology create problems which we must face. This requires new developments in the field of AI regulation, which is why the EU AI Act was finalized, and in the United States alone in 2023, seven new state privacy laws were enacted, each of which will impact AI development. A crucial piece of generative AI’s regulatory framework that needs more attention is undoubtedly data protection. But isn’t the GDPR enough? Isn’t it a model for other legislation, capable of keeping up with contemporary challenges? As recent lawsuits have shown, the problems of using large language models and training them on data are only beginning to emerge.
As AI models are often trained on massive datasets of public data, it is difficult to know whether a specific AI system had access to anyone’s personal information. Therefore, a stronger form of transparency will be needed, and existing privacy laws must evolve to account for the new circumstances under which personal data might be collected and processed.
Harmonizing existing data protection regulations with the newly adopted AI regulations is also of vital importance. We must consider the practical application of new AI policy initiatives.
Finally, protecting children’s privacy – and protecting children in the context of AI in general – is of crucial importance, especially when it comes to generative AI content. Deepfakes, voice cloning and several other forms of this technology can affect children’s mental health as well as their physical and emotional safety in the long run. It is their privacy that we must consider extremely carefully going forward. As they grow up in an online world, surrounded by AI technology, there should be specific regulation tailored to their needs. The Children’s Online Privacy Protection Act (COPPA) governs the collection and use of data of children under the age of 13 in the U.S., but it needs revisions, as do many other pieces of legislation around the world.
So what can countries and companies do in order to ensure that these necessary steps to strengthen data protection and privacy are taken as we move forward towards the age of AI? What additional precautions does generative AI need compared to other types of AI regulation?
First of all, we must put privacy concerns at the forefront of discussions about this type of modern technology. At the company level, it is crucial to develop clear internal privacy principles and policies, and to incorporate them into all stages of product development and deployment. Since states have only tools such as regulatory frameworks at their disposal, while huge companies, often led by extremely powerful people, are capable of producing potentially dangerous AI systems, a dialogue between the public and private sectors is inescapable going forward. This applies to all areas of the world, but there will be particularly fierce debates, lawsuits and clashes of interest between EU regulators and US companies.
There are already quite a few attempts to create a possible standard for responsible AI, and all guidelines are important in this time of uncertainty. Going forward, I believe that data protection impact assessments (DPIAs) should be conducted when training AI systems, and that other steps enhancing companies’ accountability and transparency should be taken. This can involve adopting Google’s Secure AI Framework (SAIF), strengthening compliance and increasing transparency across teams.
To lower the risks associated with AI, it is recommended that companies educate their executive leadership teams on the potential risks of generative AI, identify and prioritize a set of use cases, lay out a clear technology strategy, determine sources of competitive advantage, and proactively engage their ecosystem of advisors and partners.
When gathering data, including personal data, and feeding it to an AI system for training purposes, the entity conducting the training must ensure the transparency of that data processing. Generative AI’s data protection risks can be mitigated in a few ways: identifying personal data among the data automatically retrieved by the AI, directly identifying each individual data subject, and obtaining sufficient contact information to inform each data subject about the processing of their data.
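To make the first of those steps concrete, the sketch below shows one very simplified way a data-gathering entity might flag records that appear to contain personal data before they enter a training corpus. The patterns, function name and sample corpus are all hypothetical illustrations, not part of any regulation or established tool; real-world personal data detection must go far beyond pattern matching of this kind.

```python
import re

# Illustrative, minimal patterns for two common kinds of personal data.
# Real PII detection must also handle names, addresses and context-dependent
# identifiers, so treat this purely as a sketch of the idea.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def flag_personal_data(records):
    """Return the indices of records that appear to contain personal data."""
    flagged = []
    for i, text in enumerate(records):
        if EMAIL_RE.search(text) or PHONE_RE.search(text):
            flagged.append(i)
    return flagged

corpus = [
    "The weather in Budapest is mild in spring.",
    "Contact Jane at jane.doe@example.com for details.",
    "Call +36 1 234 5678 to register.",
]
print(flag_personal_data(corpus))  # → [1, 2]
```

Records flagged this way could then be routed to the second and third steps above: identifying the data subject and contacting them about the processing.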
These steps and the ideas I have mentioned all require a strong commitment both from states and companies to work together and mitigate the risks of AI, so that our future can be secure and thriving. I firmly believe that AI, and in particular generative AI, is a powerful tool that can be used to serve humanity, but we must put issues of governance first. The data protection side of things might just be the key to ensuring that generative AI can be used in peace.
Mónika Mercz, JD, specializes in English legal translation. She is a Junior Researcher at the Public Law Center of the Mathias Corvinus Collegium Foundation in Budapest while completing a PhD in Law and Political Sciences at Károli Gáspár University of the Reformed Church in Budapest, Hungary. Her past and present research focuses on constitutional identity in EU Member States, with specific focus on essential state functions, the data protection aspects of DNA testing, environmental protection, children’s rights and Artificial Intelligence.