
How Google Can Destroy Today’s News Media, with Just a Click.

Google recently announced that it will infuse its ubiquitous search engine with its powerful artificial intelligence model, Gemini. The technology is designed to answer user queries directly at the top of results pages, potentially diminishing the need for users to click on external links to gather information. While this may seem convenient for users, it poses a significant threat to news publishers, who are already grappling with declining traffic and revenue and fear that the new AI-infused search experience will further shrink their audience, starving them of readers and advertising income.

The digital revolution has already posed significant challenges for traditional journalism. Now, the advent of AI threatens to exacerbate these challenges, particularly for small journalism outlets. Google’s recent announcement of integrating its Gemini AI model into its search engine is a stark reminder of the precarious position in which many of these outlets find themselves. This development could significantly reduce the traffic to news websites, thereby diminishing their revenue and potentially driving many out of business.

The integration of AI into search engines marks a significant shift in how information is accessed and consumed. Google’s Gemini AI, designed to provide summarized answers to user queries at the top of search results, poses a severe threat to news publishers. By providing immediate answers, Google reduces the need for users to click through to news websites, leading to a significant drop in traffic. For small journalism outlets, which rely heavily on web traffic for advertising revenue, this could be catastrophic.

Modern audiences have become accustomed to instant gratification. They want quick, digestible information rather than lengthy, in-depth articles. AI-driven summaries cater to this demand, further eroding the market for comprehensive journalism. Small outlets, which often lack the resources to compete with the concise, immediate responses generated by AI, are particularly vulnerable.

The economic impact of this shift is profound. The primary revenue stream for many small journalism outlets is advertising. With reduced traffic due to AI-generated summaries, these outlets will see a corresponding drop in ad revenue. Unlike larger media organizations that might have diversified income streams, small outlets often depend almost entirely on advertising, making them especially susceptible to these changes.

As seen with the advent of the internet, free access to news led to a steep decline in print circulation and advertising revenues. Similar risks are present with AI, particularly as search engines and other AI tools may provide information directly to users without directing traffic to news websites. This phenomenon could drastically reduce web traffic, further undermining the financial viability of news organizations. In other words, AI could further disrupt the already ailing business models of local news outlets, exacerbate the loss of web traffic due to zero-click searches, and introduce errors into news stories, undermining credibility. There is also the fear that AI could replace human journalists, leading to job losses and a decline in journalistic quality. Ethical concerns arise from the lack of clear guidelines for AI use, and smaller news outlets may lack the resources to effectively implement AI technologies. Additionally, AI’s propensity for errors and its potential misuse could accelerate the spread of misinformation and disinformation. While there is considerable enthusiasm for AI’s potential to transform and introduce efficiency into communicative processes, this technocentric vision of communication is fraught with risks and challenges, primarily due to a lack of transparency from socio-legal and scientific-computing perspectives.

The dominance of a few tech giants in the AI space also poses a threat to the independence and diversity of media. These companies’ control over AI technologies and their integration into media processes could lead to a homogenization of news and a concentration of power that undermines the plurality of voices essential for a healthy democracy. AI has become another way for powerful tech corporations to extend and entrench their dominant market positions, and this could make it difficult, if not impossible, for sectors like journalism to remain independent and maintain a public interest orientation. AI’s potential to disrupt journalism mirrors the impact of the internet, which decimated traditional revenue streams and made news organizations reliant on social media platforms they do not control. The dominance of tech giants in digital advertising, publishing, and search has already undermined the financial stability of journalism, and AI threatens to exacerbate these issues. This technological shift has multiplied intermediaries and changed the media environment, necessitating innovative business models to ensure sustainability in a scenario where AI affects the processes, practices, and results of news companies.

Trust and accountability are central to journalism, but AI-generated content lacks accountability. While human journalists can be held responsible for their reporting, AI operates without ethical judgment of its own. The erosion of trust in media is a significant concern, and the rise of AI could exacerbate it. Small journalism outlets, which often have strong ties to their readership, play a crucial role in maintaining trust. Their potential demise could lead to a further decline in the public’s trust in news.

Competing with AI requires significant investment in technology and personnel, resources that small journalism outlets typically lack. The cost of developing or licensing AI technologies to keep up with larger competitors can be prohibitive, further widening the gap between large and small media organizations. This financial strain can lead to the closure of small outlets, reducing the diversity of voices in the media landscape. Beyond the economic impact, the philosophical and ethical dimensions of AI in journalism are also significant. Journalism is not merely about reporting facts; it involves analysis, context, and storytelling. AI, despite its capabilities, lacks the human touch that is crucial for nuanced journalism. Small outlets, often deeply rooted in their communities, provide perspectives and insights that are unique and valuable.

Small journalism outlets are often local, providing essential coverage of regional issues that larger organizations might overlook. The decline of these outlets means that many local stories will go untold, leading to a less informed public. This has significant implications for democracy, as local journalism is crucial for holding local authorities accountable and ensuring that citizens are informed about issues that directly affect them. The potential extinction of small journalism outlets could lead to an even greater concentration of media power in the hands of a few large corporations. This centralization can limit the diversity of perspectives and reduce the plurality of voices in the public sphere, which is detrimental to a healthy, functioning democracy. The homogenization of news and the reduction in diverse viewpoints could lead to a less informed and more polarized society.

To mitigate the impact of AI on small journalism outlets, it is essential to develop support mechanisms. This could include public funding for local journalism, grants, and subsidies to help small outlets invest in technology and innovation. Additionally, initiatives that promote media literacy and encourage audiences to support independent journalism can play a crucial role. Supporting small journalism outlets through these means can help preserve the rich diversity of voices and perspectives essential for a healthy, informed society. Without access to high-quality, human-created content, the foundational models that fuel AI applications will degrade, potentially collapsing the entire system. Yet, AI’s reliance on news content raises concerns about intellectual property rights and fair compensation. News media bargaining codes, which are being adopted or considered in various jurisdictions, could require tech platforms to negotiate with news publishers and ensure fair compensation for the use of their content in AI systems.

The sustainability of journalism in the AI era will depend on the industry’s ability to adapt its business models and assert its pricing autonomy. News outlets must optimize revenue streams and develop sophisticated compensation frameworks for the use of their content in AI applications. They also need access to information about how their content is used in AI systems and foundational model weights. Government regulations will be crucial in enabling news organizations to negotiate fair deals and protect their intellectual property.

The development of AI technologies should be guided by ethical considerations that take into account the impact on journalism. This includes developing algorithms that prioritize diverse and high-quality content rather than merely the most convenient summaries. Collaboration between tech companies and journalism organizations could help create AI solutions that support rather than undermine the industry. Ethical AI development is crucial to ensure that technology complements rather than competes with human creativity and integrity. Regulation can also play a role in protecting small journalism outlets. Legislation that ensures fair compensation for the use of journalistic content by AI and search engines can help maintain revenue streams for news publishers. Additionally, antitrust measures to prevent the monopolization of media by a few large corporations are essential. Legislative measures that promote a fair and competitive media landscape can help safeguard the future of journalism.

János Tamás Papp JD, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a legal expert at the Department of Online Platforms of the National Media and Infocommunications Authority of Hungary. He has taught civil and constitutional law since 2015 and is a founding member of the Media Law Research Group of the Department of Private Law. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of Pázmány Péter Catholic University. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses.”
