The Death of the Photograph: The Collapse of Photographic Truth in the AI Era (Opinion)

As AI technology becomes increasingly integrated into everyday tools like smartphone cameras, the distinction between genuine and fabricated images is becoming harder to discern. This shift, exemplified by features in devices such as the Google Pixel 9, raises important questions about the future of photography and the erosion of public trust. With AI-generated photos now so convincing, our ability to trust visual evidence is at risk, prompting a reevaluation of how we perceive reality in a world where manipulation is easier than ever.

AI-enhanced photo editing, which allows users to modify images with unprecedented ease and realism, poses a significant risk to the integrity of visual evidence in legal proceedings. Historically, photographs have been regarded as reliable and objective forms of evidence. However, the introduction of sophisticated AI tools, which can seamlessly alter images, blurs the line between reality and fabrication. This undermines the trust that courts and the public place in photographic evidence, potentially leading to wrongful convictions or the dismissal of crucial evidence due to doubts about its authenticity.

For example, consider the implications of AI in criminal cases where photographic evidence is key. If an image can be convincingly altered to include or exclude a person, weapon, or other crucial detail, it becomes challenging to ascertain the truth. The legal principle of “beyond a reasonable doubt” may be compromised when juries and judges can no longer be certain that the images presented to them accurately reflect the reality of the situation. This risk is not hypothetical; experts have already raised concerns about the use of deepfakes and other AI-generated content in courtrooms, warning that these technologies could be used to manipulate evidence or discredit legitimate evidence.

But the potential danger extends beyond the courtroom. AI-driven photo manipulation threatens to weaken public trust in visual media more broadly, which could have a chilling effect on free speech. In an era where social media platforms are the primary venues for public discourse, the ability to discern truth from fiction becomes crucial. If people begin to doubt the authenticity of every image they see, the power of visual media as a tool for communication, protest, and social change is diminished. This skepticism could lead to a scenario where genuine images that expose wrongdoing or highlight social injustices are dismissed as fabrications, thereby stifling free speech and the ability to hold those in power accountable.

The consequences of this growing distrust are already evident in the realm of journalism. With AI tools making it easier to create convincing fake images, news organizations face an uphill battle in maintaining the credibility of their visual reporting. This is particularly concerning in contexts where photographic evidence is crucial to the story, such as in conflict zones or during protests. A report from the Brookings Institution highlights the risks of deepfake technology in eroding trust in journalism and the broader media ecosystem, emphasizing the need for new tools and strategies to verify the authenticity of visual content. Their study, “The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?” explores a troubling phenomenon in contemporary political discourse known as the “liar’s dividend.” This term refers to the benefits that politicians may reap by falsely claiming that true and damaging information about them is actually fake news or deepfakes. The research investigates how these false claims can help politicians maintain or even increase their support among the public following a scandal.

The researchers conducted five survey experiments with over 15,000 American adults to test these strategies. Participants were presented with scenarios involving real political scandals and the politicians’ subsequent responses, which included falsely claiming that the damaging information was fake news or a deepfake. These strategies generally succeeded in boosting support for the politicians, but their effectiveness depended on format: text-based scandals were more susceptible to false claims of misinformation, likely because text is easier to dismiss as fabricated, while video is perceived as more credible and harder to fake. When a scandal was captured on video, claims of misinformation were far less convincing. This suggests that video remains a relatively stronger tool for holding politicians accountable, as visual evidence is harder to discredit than text.

Despite the apparent effectiveness of these false claims in specific scenarios, the study found that they do not necessarily lead to a broader erosion of trust in the media. While one might expect frequent false claims of misinformation to diminish public trust in news sources, the research did not find consistent evidence of this effect. Instead, the primary impact of these claims was on belief in the specific scandal and on support for the politician involved, rather than on general trust in the media. The research also compared the liar’s dividend strategy to other common responses to scandals, such as simply denying the scandal without alleging misinformation or apologizing for the offense. Interestingly, claiming misinformation was more effective at maintaining or increasing political support than either of these alternatives. Apologizing, while perhaps more ethically desirable, proved less effective at bolstering a politician’s standing than falsely crying “fake news.”

The study’s findings have significant implications for political accountability, media trust, and the future of democratic discourse. The researchers argue that while video evidence might currently resist the liar’s dividend more effectively than text, the ongoing advancements in AI and media manipulation technologies could change this dynamic. As deepfakes become more sophisticated and harder to detect, the potential for their misuse in political contexts could grow, further complicating the relationship between truth and public perception.

As noted above, fact-checking and media literacy efforts are important, but they may not be sufficient to counteract the liar’s dividend entirely. The persistence of this phenomenon underscores the need for more robust strategies to protect the integrity of political discourse and to ensure that public figures are held accountable for their actions. The study paints a concerning picture of how false claims of misinformation can be used to undermine political accountability, and it highlights the need for continued vigilance and innovation in combating the spread and impact of false information in the political sphere. Its authors call for further research into how different types of scandals and media formats interact with misinformation strategies, as well as into the long-term effects of these tactics on public trust and political stability.

While companies like Google and Apple assure us that their AI tools, such as those found in the Pixel 9 and iPhones, are designed with safeguards and responsible content moderation, these measures often fall short of addressing the broader implications. The increasing prevalence of AI in image generation is challenging our ability to distinguish between what is real and what is fabricated. As the technology continues to evolve, the reliability of photographs, and by extension our trust in visual evidence, faces significant erosion, forcing us to question the very nature of reality in an AI-driven world.


János Tamás Papp, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary and a research expert at the National Media and Infocommunications Authority of Hungary. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of the Pázmány Péter Catholic University where he has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses”.
