The Use of Deepfakes in the Electoral Process
Deepfakes can be a threat at both the individual and the state level. The former category includes the non-consensual disclosure of explicit imagery, harassment, extortion, identity theft, and deepfake-based social engineering attacks, while the latter includes the spread of fake news, threats to national security, and, among the most frightening and socially impactful uses, the manipulation of electoral processes. One can imagine countless ways in which deepfakes could be used in electoral processes; let us look at the most likely ones.
- A credible-looking, high-quality deepfake can have an immediate impact on voters by putting false words in the mouth of the candidate it features, or by portraying the candidate doing things he or she has never actually done. This can damage a person’s reputation in the long term, even if the recording is later exposed as fake.
- A deepfake impersonating a politician or news anchor can also feed voters decision-altering false information, causing irreparable confusion (e.g., on election day, when there is no time left for a rebuttal) and potentially changing the outcome of an election, for instance in a district where the contest is very tight.
- Political actors themselves can invoke the supposed threat of deepfakes to challenge factual information that damages their reputation. They can deliberately lie by dismissing what they know to be genuine as a deepfake and thus avoid responsibility for the content.
- Finally, they can use the hypothetical threat of deepfakes to confuse voters. As a real-life example, during the 2020 parliamentary elections in Georgia, the ruling party claimed that the opposition would publish a deepfake video before the elections. The claim was made without any real evidence, and there is no indication that such videos were ever actually published. Similarly, in May 2023, Turkish President Tayyip Erdoğan’s main political rival accused Russia of being behind fake material published on social media immediately before the May elections.
Examples of political use of deepfakes
Major political actors around the world still use deepfakes with caution. The first actual use of a deepfake in a political campaign took place in 2020 in India, ahead of the legislative assembly elections in Delhi. In the original video, Manoj Tiwari, a member of parliament, speaks in English, criticizes his political opponent, and encourages voters to vote for his party. In the deepfake version, the politician says the same thing, but in Haryanvi, a Hindi dialect spoken by the party’s target voters. The deepfake reached around 15 million people in 5,800 WhatsApp groups. The party partnered with a political communications company to create deepfakes targeting voters in more than 20 languages used in India. The aim was therefore (for now) to reach as many voters as possible by overcoming language barriers, rather than deliberate deception.
In the United States, the Republican Party released a 30-second political ad in April 2023 consisting of AI-generated images. The video is not actually a deepfake, as it is not presented as if it were real; instead, it envisages various disasters in the event of Joe Biden’s re-election: China invading Taiwan, San Francisco overrun by criminals, and so on. The quality of the video, and the fact that one of the two major US parties uses AI in its official communications, shows that political actors are aware of the potential of AI and are prepared to use it in their campaigns (time will tell whether only in a fair way).
Deepfake video and audio recordings are particularly dangerous because they are more persuasive to voters. They are much less likely to be questioned than simple text, such as a newspaper article, or an image. After all, everyone knows that fake news and retouched pictures have existed for decades. But not everyone knows how quickly, easily, and convincingly deepfake content can be produced today. As the dangers of deepfake technology become more widely known (for example, once public figures, especially politicians, who are credible to the average person, start talking about it; this has not yet happened, at least not in Hungary), public confidence in video and audio recordings could be shaken. We may start to doubt everything and no longer believe anything we see on the internet. This can even lead to (even greater) indifference towards politics.
Fundamental rights violated
Many fundamental rights of the person targeted by a deepfake may be violated. Fakes that manipulate or misrepresent the speech or actions of the target can undermine the victim’s human dignity by affecting how the public perceives him or her. Deepfakes involving the unauthorized use of the target’s image and voice may violate the right to privacy when used to disseminate defamatory or harmful content. And when deepfakes are used to create content that promotes hatred, discrimination, or incitement to violence, they may violate the prohibition of inhuman treatment. Deepfakes can also interfere with the public’s access to reliable information, hampering both the ability to make informed decisions in elections and the freedom of information. Last but not least, the right to fair elections may be violated, as the use of deepfakes can undermine the fairness and integrity of elections.
On the other hand, overly strict regulation (or even a ban) of deepfakes could infringe freedom of expression. Restricting content that does not affect the electoral process would inhibit artistic creativity and limit the free use of innovative means of creative expression, as deepfakes can also be used in entertaining, satirical ways. Moreover, if the exact content of the regulation is uncertain, those subject to it are more likely not to use the technology at all, or not to use it in the way they originally intended. This is the so-called chilling effect.
Arguments against regulating deepfakes in the electoral process
Others argue that there is no need to regulate deepfakes explicitly, as they are just a technology used to disseminate fake news. Edited images and videos have always existed; they simply had to be created manually rather than with the help of artificial intelligence. This position is debatable, because the speed of the manipulation and the number of people reached are no longer what they were in the past. This is especially true once there is no need to pre-record a particular video at all, because the technology can mimic a person in real time, or when an entire fake “news channel” consisting solely of deepfakes is created to spread Chinese propaganda. Based on what we have seen so far (fortunately not much), such deepfakes are not spread in the mainstream media or on the most-watched news channels, but on smaller portals, instant messaging apps, forums, and the like. Nevertheless, they can reach many more people much faster and have a greater impact: the pre-election campaign period is so tense and sensitive that any discrediting fake news has a greater effect than it would otherwise have.
Possible regulatory options
The active involvement and preparedness of several actors are necessary to effectively counter the harmful effects of deepfakes. Voters are expected to read between the lines. They should be aware of the existence of deepfake technology, its harmful effects, and the tell-tale signs. The media must also be prepared if they are to report credibly on the electoral process. They need to develop a practice of reporting on deepfake incidents in a transparent and understandable way. Election management bodies also have a responsibility to ensure the integrity of elections. They should organize media campaigns to raise awareness of the dangers of deepfakes.
As far as the state’s regulatory options are concerned, I believe it would be advisable for the legislator to make the flagging of deepfake content compulsory. This rule would obviously be respected only by creators of content produced for entertainment purposes, as no one would voluntarily label deceptive content. Platforms should therefore be responsible for ensuring that these labelling systems work properly. In the United States, the proposed DEEPFAKES Accountability Act provides for up to five years’ imprisonment and substantial civil liability for breaches of the labelling obligation. The EU’s proposed AI Act would impose the same obligation on content producers.
See Mráz, Attila. 2021. “Deepfake, Demokrácia, Kampány, Szólásszabadság.” In A Mesterséges Intelligencia Szabályozási Kihívásai, 249–277.
Gellért MAGONY is a student at the Faculty of Law and Political Sciences of the University of Szeged and a scholarship student of the Aurum Foundation. His main area of interest is the relationship between the digital world and law. His previous research has focused on the relationship between social networking sites and freedom of expression and the statehood of metaverses. He is currently researching social influence through deepfakes.