Deepfakes and Freedom of Expression—Different Approaches
The rise of deepfakes raises important questions about the relationship between freedom of expression and the need to protect people from harm. On the one hand, freedom of expression is a fundamental human right that allows people to share their ideas and opinions through any technology, including deepfakes, without fear of censorship. On the other hand, deepfakes can also be put to malicious use: they can spread false information and damage people’s reputations far faster and more efficiently than earlier technologies. There is no easy answer to the question of how to balance these competing interests. Some argue that deepfakes should be banned outright, while others claim that they should be protected under freedom-of-expression laws. There is also a middle-ground approach that would allow deepfakes for certain purposes, such as entertainment or education, while restricting their use for malicious ends.
Supporters of an outright ban argue that the rise of deepfakes poses a significant threat to democracy and individual rights, and that it challenges the traditional position of civil libertarians on harmful speech. In their view, three well-known civil libertarian claims about free speech cannot be sustained in the digital age. The first claim is that an unlimited “marketplace of ideas” ultimately leads to the discovery of truth. The second is that harmful speech is always best addressed through counter-speech rather than regulation. The third is that even well-intentioned and modest regulations of speech will ultimately be used to silence minorities. These claims do not stand up, they argue, because unlimited free speech rights, especially in an era of technologically mediated expression, have led to the disintegration of truth, the reign of unanswerable speech, and the silencing and self-censorship of minorities. A further problem is confirmation bias: the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories. This bias has long been known in the cognitive sciences and has been the subject of much research, including on the formation of opinion bubbles. Studies show that people are often unable to distinguish between fact and opinion and that, even when presented with facts that contradict their beliefs, they are unlikely to change their minds. Consequently, proponents of a ban call for a new approach that considers the harm speech can cause and the need to protect vulnerable groups from its effects. This principle should be based on dignity, equality, and democracy, and should prioritize the protection of individual rights and the promotion of social justice.
Others are less categorical. In this view, deepfakes are simply synthetic media that use machine learning to manipulate or generate visual or audio content, and there are two common answers to the question of whether suppressing them would violate freedom-of-expression norms. The first is the “kneejerk” approach, according to which deepfakes should be suppressed at all costs: because they are definitionally fake, and therefore entirely unwanted, they deserve no protection under freedom-of-expression principles and may be targeted by whatever measures are needed to deal with the harm they can produce. The second is the “dual-policy” approach, which would suppress deepfakes only in certain contexts: overt cases, where there is no attempt to hide the fakery, would mostly deserve protection under freedom-of-expression principles, while covert cases would not. Proponents of the more nuanced position consider both answers inadequate because they fail to capture the complexity of the issue. Measures to tackle deepfake harms do not raise any new and unfamiliar freedom-of-expression challenges, because the harms themselves are not new and unfamiliar: they resemble the harms associated with other forms of speech, such as defamation and fraud. Existing legal and regulatory frameworks can therefore be used to address these harms without violating freedom-of-expression norms. Measures to tackle deepfake harms are necessary, but they must be carefully crafted to avoid infringing on freedom of expression. The best solution is thus a nuanced approach that balances the need to protect individuals and society from harm against the need to protect freedom of expression.
Deepfakes and the First Amendment of the US Constitution
Political deepfakes fall into two main categories: those made for comedic purposes that nevertheless have negative consequences, and those designed to spread misinformation and lies. Deepfakes in the former category can be viewed as a form of political speech and are therefore protected under the First Amendment. If a deepfake is intended to tease a political candidate or government official, it can be argued that it is a form of political speech and parody. The problem is that deepfakes are so convincing that they may not appear to be parodies at all. For a parody to function as one, the audience must recognize both the subject of the parody and the parodist’s mocking distortions. The easiest and most unambiguous solution is to label or watermark the video as a deepfake. The latter category is more problematic, because the potential of deepfakes to spread political misinformation is significant. Deepfakes could be used to sway elections and amplify the dissemination of “fake news.” They can serve as a political weapon, and politicians on both sides of the aisle have expressed concern about their use. In this sense, deepfakes can be regarded as a type of weaponized AI suited to undermining trust in, or discrediting, public actors. Diplomats and ambassadors have also claimed to be targets of deepfakes. Another burning issue is the “Liar’s Dividend,” a phenomenon in which public figures claim that their genuine missteps were fake news spread through a deepfake, rather than admitting they were real.
Deepfakes in a European perspective
From a European perspective, two rights under the European Convention on Human Rights collide: Article 8, the right to privacy, and Article 10, the right to freedom of expression. The European Court of Human Rights has ruled that Article 8 also includes the right to the protection of one’s honor and reputation. Article 10, on the other hand, must be understood very broadly and encompasses shocking and offensive expression, including personal opinions and satire (e.g., publishing a fictitious interview). When deepfakes are published, it is impossible to predict which of these rights would prevail, because the Court offers few general guidelines and analyzes each case on its own merits in light of the relevant facts. As a result, the freedom-of-expression framework is flexible enough to address problematic deepfakes. According to the Court’s case law, public figures can invoke their right to privacy to protect their name, honor, and reputation, even if they themselves wish to be the center of attention. At the same time, the Court has also ruled that public figures must tolerate intrusion into their private lives to a greater extent than ordinary citizens and must accept expressions of ridicule. This is a well-established and widespread principle. In the case of deepfakes, however, it would be useful to define which types are generally considered legitimate and which are not. This would provide legal certainty both for citizens wishing to use satire and for public actors. Although satirical depictions and ridicule of public figures have existed for a long time, deepfakes, because of their realistic nature, may have a greater impact than anything before.
Mary Anne Franks (2019): Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions. In Maryland Law Review. Volume 78, Issue 4. p. 892. https://repository.law.miami.edu/fac_articles/792/
Ibid., p. 893
Ibid., p. 897
Ibid., p. 896
Ibid., p. 6
Ibid., p. 7
Ibid., p. 12
Lindsey Wilkerson (2021): Still Waters Run Deep(fakes): The Rising Concerns of “Deepfake” Technology and Its Influence on Democracy and the First Amendment. In Missouri Law Review. Volume 86, Issue 1. p. 425. https://scholarship.law.missouri.edu/mlr/vol86/iss1/12/
Ibid., p. 426
Ibid., p. 428
Kaylyn Jackson Schiff – Daniel Schiff – Natália S. Bueno (2023): The Liar’s Dividend: How Deepfakes and Fake News Affect Politician Support and Trust in Media. In OSF. p. 4. https://doi.org/10.31235/osf.io/x43ph
Bart van der Sloot – Yvette Wagensveld (2022): Deepfakes: regulatory challenges for the synthetic society. In Computer Law & Security Review. Volume 46, September 2022. p. 10. https://doi.org/10.1016/j.clsr.2022.105716
Ibid., p. 11
Gellért MAGONY is a student at the Faculty of Law and Political Sciences of the University of Szeged and a scholarship student of the Aurum Foundation. His main area of interest is the relationship between the digital world and law. His previous research has focused on the relationship between social networking sites and freedom of expression and the statehood of metaverses. He is currently researching social influence through deepfakes.