France v. Google: The Latest Regulatory Action to Protect News Media from AI
France’s competition authority has levied a substantial fine of €250 million against Google, a subsidiary of Alphabet Inc. This penalty arises from Google’s non-compliance with the European Union’s intellectual property regulations in its dealings with news publishers. Central to the authority’s concerns is Google’s artificial intelligence (AI) service, particularly its AI-driven chatbot originally known as Bard, now rebranded as Gemini, which allegedly utilized content from publishers and news agencies for its development without appropriate notification.
The French competition authority stated that Google had not adequately informed news publishers that it was utilizing their content to enhance its artificial intelligence algorithms. This decision is part of a broader ruling that found Google at fault for its negotiation tactics with media entities. The authority imposed a €250 million fine on Google for failing to secure equitable licensing agreements with media companies for the use of their article links in search results. Additionally, the authority criticized Google for training its AI chatbot on news articles without notifying the media companies in advance or providing them with a mechanism to opt out until September of the previous year.
Although the legal intricacies regarding the fair use of news content for AI training remain unresolved, French regulators have determined that Google breached a previous agreement with the government by not disclosing the use of publisher content for its AI chatbot. French authorities have consistently supported local publishers in their argument that large tech companies, including Google, have profited from their content without offering fair remuneration. Following a €500 million fine in 2021, Google was mandated to negotiate licensing agreements with French publishers. The regulator found, however, that Google did not conduct these negotiations in good faith, accusing the company of withholding critical information from the mediator overseeing the process and of relying on “opaque” data to calculate payments to publishers. This, according to the authorities, failed to account for the various ways Google profits from media-produced content.
This development is the latest in a series of ongoing disputes between Google and publishers over compensation for displaying news content in search results and other Google services. Meta, the parent company of Facebook and Instagram, has faced similar challenges in Australia and Canada, as governments seek to ensure that publishers are compensated by these tech giants.
The dispute compounds an already challenging environment for news outlets, whose traditional revenue streams have eroded with the rise of digital media. Advertising revenues have shifted from print and broadcast media to online platforms, and readers have become accustomed to free access to news content, making it difficult for outlets to sustain operations through subscriptions and sales alone. The unauthorized use of their content by AI systems could further diminish their ability to generate revenue, leading to reduced resources for investigative journalism and news coverage and, ultimately, a decrease in the diversity and quality of information available to the public.
For news organizations to survive and thrive in the age of AI, there must be mechanisms to ensure that they are fairly compensated for the content they produce. This could include licensing agreements, revenue-sharing models with AI service providers, or legislative actions to protect the intellectual property of news content. Additionally, AI developers and news outlets could collaborate to ensure that AI systems direct users back to original news sources, thereby supporting traffic to news sites and encouraging financial support through subscriptions and advertising.
While AI chatbots represent a significant innovation in information technology, their potential to use news content without compensation poses a serious threat to the viability of news organizations. Addressing this challenge is crucial not only for the survival of these entities but also for the preservation of a diverse, reliable, and robust news ecosystem that is fundamental to informed public discourse and democracy. As we navigate this new digital frontier, it is imperative that the value of journalistic work is recognized and protected in the evolving media landscape.
János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and is a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.