Regulating Pornographic Deepfakes—the Origin of the “Problem”

Pornographic content accounts for an estimated 96% of deepfake footage, most of which is made without consent. In fact, deepfakes first entered public consciousness through pornographic use. In December 2017, a trend started on Reddit in which users replaced the faces of porn actresses in existing adult films with the faces of famous movie stars. Pornographic deepfakes flooded the site, and the trend continued for weeks before the deepfake subreddit was finally banned in February 2018, with Discord, Twitter, and Facebook following suit.

Pornographic deepfakes are a serious threat. They can be used to create fake revenge porn videos, to blackmail people into doing things they do not want to do, or simply to generate profit via ads or to spread misinformation. Their creation can have a devastating impact on the victim’s reputation, career, and personal life, and can even lead to victims being harassed, stalked, or assaulted.

There are two types of AI-generated images: “classical” deepfakes and “nudified” images. The latter means that, from an uploaded image of a real person, a convincing nude photo can be generated using free applications and websites. While some of these apps have been banned or deleted (e.g., DeepNude was shut down by its creator in 2019 after strong opposition), similar apps keep appearing in their place.

In recent years, awareness of the dangers of pornographic deepfakes has grown, and several countries have passed laws that make it illegal to create or distribute deepfakes without consent. However, as deepfake technology is still evolving rapidly, lawmakers and technology companies are struggling to keep up. In this article, we explore the current regulations of some US states and other countries and highlight some of their special characteristics.

Legislative attempts in the USA

Currently, there are no deepfake-specific laws at the federal level in the US. There was a now-dead bill, the DEEP FAKES Accountability Act, which would have required producers of deepfakes to comply with certain digital watermark and disclosure requirements, and would have established new criminal offenses for producing deepfakes that do not comply with these requirements and for altering deepfakes to remove or meaningfully obscure the required disclosures. A violator would have been subject to a fine, up to five years in prison, or both. The act would also have established civil penalties and permitted individuals to bring civil actions for damages. In addition, software manufacturers who reasonably believed their software would be used to produce deepfakes would have been required to ensure it had the technical capability to insert those watermarks and disclosures.

At the state level, several states have effective or pending regulations on pornographic deepfakes (Virginia, Georgia, Minnesota, Hawaii, California, Texas, and New York). The first state in the United States to regulate non-consensual pornographic deepfakes was Virginia. The state’s deepfake rules are part of the Code of Virginia and amend the already existing revenge porn rules, the criminal provision on “unlawful dissemination or sale of images of another”. The amendment states that the victim can be “a person whose image was used in creating, adapting, or modifying a videographic or still image with the intent to depict an actual person and who is recognizable as an actual person by the person’s face, likeness, or other distinguishing characteristic.” The rules therefore cover all digitally manipulated media, not just deepfakes, and apply not only to videos but also to still images. Violating the rule is a Class 1 misdemeanor, which carries up to 12 months in jail and up to $2,500 in fines.

Georgia’s rules are very similar. In the Official Code of Georgia, the criminal provision is called “prohibition on nude or sexually explicit electronic transmissions”. It is likewise not deepfake-specific but covers “falsely created videographic or still images”, a broader category. The punishment for a violation is imprisonment for up to five years, a fine of up to $100,000, or both.

Minnesota, on the other hand, has a not-yet-effective bill in committee that mentions the word “deepfake” and will hopefully soon become part of the Minnesota Statutes. Even though the text gives a definition and uses the term deepfake, it applies to images too, and, as in the aforementioned states, the definition also makes it possible to commit this crime by creating fake media the old-fashioned way, for example with CGI or Photoshop.

In Hawaii, the crime is called “violation of privacy in the first degree” and is part of the Hawaii Revised Statutes. In addition to the elements found in other states’ laws, this definition requires the “intent to substantially harm the depicted person”. The interests harmed may relate to “health, safety, business, calling, career, education, financial condition, reputation, or personal relationships, or […] revenge or retribution”.

The wording of these laws shows that they are not actually deepfake-specific, contrary to what most of the legal literature and educational articles state, which is, incidentally, not a problem. The legislators were motivated by the rise of deepfakes but created technology-neutral regulation. Using “obsolete” technology, such as photoshopping an image, with the same intent as pornographic deepfakes is rightfully regulated the same way. This regulatory approach reflects the more lenient view of deepfakes: rather than rushing into strict, technology-specific, or outright prohibitive legislation, it holds that current legal frameworks are sufficient.

All the above-mentioned laws are criminal in nature. California’s law, on the other hand, establishes a private right of action for victims against a person who either creates or intentionally discloses sexually explicit material without the consent of the depicted person. Victims are entitled to compensation in an amount equal to the monetary gain made by the defendant from the creation, development, or disclosure of the sexually explicit material and other specified economic damages, or to specified statutory damages, punitive damages, and reasonable attorney’s fees and costs. The law sets a limitation period of three years from the date the victim discovers the material.

Legislative attempts in other countries

Other countries are also working on banning pornographic deepfakes. The United Kingdom has a new law that is close to taking effect, the Online Safety Bill. Contrary to most educational articles that introduce this bill as a deepfake regulation, it only amends the offense of “Sending etc. photograph or film of genitals” in the Sexual Offences Act, under which a photograph can be “an image, whether made by computer graphics or in any other way, which appears to be a photograph or film”. Again, this definition is far broader than deepfakes.

South Korea, unlike other countries, prohibits not only the distribution of nonconsensual deepfakes but also their creation. The deepfakes are called “false video products” which “may cause sexual desire or shame against the will of the person who is subject to video.” Under this new law, distribution can be punished with up to five years in prison, or up to 12 years if the perpetrator sold access. The law does not require malice or intention of harm on the part of the creator, and it expressly prohibits the creation of nonconsensual pornographic deepfakes for purposes of sexual gratification as well as defamation. This means that in court, victims do not need to prove that they were harmed by the creation of the deepfakes, only that the deepfakes exist at all. South Korea’s law is quite restrictive. One contributing factor might be that deepfake creators tend to target female K-pop singers more often than celebrities of other nationalities. Another important aspect might be the Nth Room case, in which hundreds of women and dozens of minors were targeted with horrific acts of sexual violence, including rape and physical assault, as well as the creation of pornographic deepfakes and other forms of nonconsensual pornography.

China’s regulation, the Deep Synthesis Management Provisions issued by the Cyberspace Administration of China in late 2022, is even stricter. The Provisions are significant because they are the first comprehensive regulations in the world governing the use of deep synthesis technology. They reflect the Chinese government’s concerns about the potential for deep synthesis technology to be used to spread misinformation, damage reputations, and undermine social stability, and they impose obligations on both deep synthesis service providers and users. Deep synthesis service providers must register with the Cyberspace Administration of China and provide detailed information about their services. They must also implement technical safeguards to prevent the spread of illegal and harmful content, including using watermarks or other labels to identify deep synthesis content and developing algorithms to detect and remove deepfakes. Users must use deep synthesis technology in a responsible manner and avoid using it to create or disseminate illegal or harmful content; they are also required to label deep synthesis content as such to avoid misleading others. There are some prohibited topics and uses: “information inciting subversion of State power or harming national security and social stability; obscenity and pornography; false information; information harming other people’s reputation rights, image rights, privacy rights, intellectual property rights”. It is also prohibited to engage “in activities harming national security, destroying social stability, upsetting social order, violating the lawful rights and interests of others, or other such acts prohibited by laws and regulations.”

The European Union, on the other hand, is not yet prepared to combat pornographic deepfakes. At the EU level, labeling obligations appear in both the DSA and the upcoming AI Act: the DSA obliges very large online platforms to label deepfakes, while the AI Act (once adopted) will require content creators to tag their deepfake content. The trend at the member state level shows a lack of dedicated pornographic deepfake laws; such cases are usually subsumed under existing criminal offenses such as extortion, violation of privacy, or sexual harassment.


Gellért MAGONY is a student at the Faculty of Law and Political Sciences of the University of Szeged and a scholarship student of the Aurum Foundation. His main area of interest is the relationship between the digital world and law. His previous research has focused on the relationship between social networking sites and freedom of expression, and on the statehood of metaverses. He is currently researching social influence through deepfakes.
