
Synthetic Media and Politics: Deepfakes, Misinformation, and the Future of Democracy (Part I)

As the use of deepfake technology grows, concerns over its impact on election integrity are escalating. Deepfakes, which are digitally manipulated videos or recordings that can convincingly alter reality, have become an increasingly dangerous tool for spreading disinformation. With elections relying heavily on the public’s ability to trust information, deepfakes pose a unique threat by distorting facts, confusing voters, and undermining the democratic process.

Deepfakes, a form of synthetic media defined as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said,” have rapidly become a growing concern in many areas of society. One of the most worrying potential uses of deepfakes is their ability to disrupt democratic processes, including elections. In the United States, where the political climate is deeply polarized, the rise of deepfake technology raises serious concerns about misinformation, voter manipulation, and the integrity of the electoral process.

In recent years, we have seen a steady increase in manipulated content targeting political figures. While some of these attempts were easily debunked, they nonetheless highlighted the threat that deepfakes pose. In one notable case, a doctored video of Nancy Pelosi, then Speaker of the House of Representatives, circulated online, edited to make her appear intoxicated and incoherent. Similar manipulated videos have targeted Barack Obama and Donald Trump. While these particular videos did not involve sophisticated AI techniques, they demonstrated how video manipulation can shape public perception of political figures in harmful ways. Such videos are often difficult for the average person to discern as fake, particularly when they play into existing biases or suspicions about a particular candidate or political figure.

In the 2024 election cycle, deepfakes are likely to play an even more prominent role. Political campaigns are now preparing for the possibility that manipulated videos or audio may be used to smear candidates or mislead voters. In this year’s campaign, deepfake technology has been used to target both Vice President Kamala Harris and former President Donald Trump. One instance involved a deepfake video of Harris shared by Elon Musk, which falsely portrayed her making derogatory remarks about being a “diversity hire” and critiquing her border policies. On the Republican side, the anti-Trump political group The Lincoln Project released a deepfake of Donald Trump’s late father, Fred Trump, who died in 1999. In the video, an AI-generated Fred harshly criticizes his son, calling him a failure and expressing shame at his legacy, delivering stinging remarks such as “I’m ashamed you have my name” and mocking Trump’s inability to profit from ventures like casinos.

What makes deepfakes even more concerning in the context of elections is their potential to spread rapidly across social media platforms. With the ability to go viral in a matter of hours, deepfakes can significantly shape public opinion before they are fact-checked or removed. Social media companies often struggle to keep up with the sheer volume of content posted every second, and harmful videos can reach millions of viewers before any corrective action is taken. This delay, however brief, may have lasting effects on how voters perceive a candidate or party, especially in the final days before an election, when emotions and stakes are high.

Adding to the complexity of the issue, deepfakes are becoming increasingly difficult to detect. AI technology is advancing rapidly, and the most sophisticated deepfakes are now nearly indistinguishable from genuine video or audio. This puts an enormous strain on both human and automated efforts to identify and flag manipulated content. Companies like Facebook and YouTube have implemented measures to detect deepfakes, but their tools are far from perfect. Even with these measures in place, the speed at which misinformation spreads makes it a serious threat to election integrity.

As we look toward future elections, the challenges posed by deepfakes will only grow more urgent, because the ability to shape narratives through synthetic media has the potential to destabilize political processes and sow division among voters. While the EU has taken concrete steps to address the issue, the U.S. is still grappling with how to respond.

The European Commission’s guidelines for online platforms provide clear expectations for how tech companies should handle issues related to misinformation and manipulated media. These guidelines, published as part of the new regime of the Digital Services Act, aim to create a safer and more transparent online environment for users across Europe. The document stresses the importance of early detection of disinformation, including deepfakes, and emphasizes the responsibility of platforms to inform users when they have interacted with misleading or harmful content. One of the most notable aspects of the EU guidelines is the obligation for online platforms to label content that is likely to be manipulated or deceptive. This stands in stark contrast to the U.S., where platforms often rely on voluntary measures, and regulation is more fragmented. The EU has also outlined clear penalties for non-compliance, with fines for platforms that fail to take adequate steps to limit the spread of harmful content.

This regulatory approach is intended to hold online platforms accountable in a way that ensures greater transparency and safeguards the democratic process from being undermined by technological manipulation. The U.S., by comparison, lacks a cohesive regulatory framework for dealing with deepfakes. While there have been some efforts at the state level to address the issue, such as California’s law making it illegal to distribute deepfake videos within 60 days of an election, these laws vary widely and are not universally applied. This leaves considerable gaps in how the U.S. handles the threat of deepfakes, especially at the national level. Without a unified approach, the potential for deepfakes to influence elections remains a significant concern. Moreover, the strong tradition of free speech in the U.S. complicates the regulation of online content: many argue that any attempt to limit speech, even speech that is false or misleading, could be seen as a violation of First Amendment rights.

If states intend to take action on this issue, they face significant challenges, as demonstrated by the fate of California’s deepfake law. The law, which aimed to regulate the use of deepfake technology, was largely struck down by a federal judge who found it unconstitutional for being too broad in its restrictions on free speech. Senior Judge John A. Mendez ruled that the law overly restricted expression, labeling it a blunt tool that unjustly limits humorous content and stifles the free exchange of ideas. Only a small part of the law, requiring verbal disclosure in audio-only deepfakes, was upheld. The First Amendment makes it exceptionally challenging to craft laws that regulate the misuse of deepfakes without infringing on free speech rights. Courts have consistently shown reluctance to uphold laws that might suppress legitimate political or humorous expression, even when those laws are intended to combat harmful disinformation. While states may seek to curb the spread of deepfakes, particularly those that could interfere with elections or public discourse, the strong protections of the First Amendment make it difficult to regulate without overstepping constitutional boundaries. On the other hand, the lack of clear rules regarding deepfakes leaves a wide opening for malicious actors to exploit the technology in ways that could have serious consequences for democracy. The U.S. must balance these concerns as it considers how to address deepfakes in the future.

In the next part, we will explore how the rise of outright lies in political discourse has not only reshaped our understanding of misinformation but also paved the way for the increasing use of deepfakes as a tool to manipulate public perception. We will also examine how this evolution, coupled with the spread of deepfakes, makes it ever harder for the public to distinguish truth from deception, and what this means for the future of democratic accountability.


János Tamás Papp, PhD is an assistant professor at Pázmány Péter Catholic University, Hungary, and a research expert at the National Media and Infocommunications Authority of Hungary. He earned his JD and PhD in Law at the Faculty of Law and Political Sciences of Pázmány Péter Catholic University, where he has taught civil and constitutional law since 2015 and became a founding member of the Media Law Research Group of the Department of Private Law. His main research fields are freedom of speech, media law, and issues related to freedom of expression on online platforms. He has a number of publications regarding social media and the law, including a book titled “Regulation of Social Media Platforms in Protection of Democratic Discourses.”
