Understanding Deepfakes: From Creativity to Misuse
Key Takeaways
Deepfakes offer numerous legitimate applications, provided they respect individuals’ rights. However, malicious uses continue to dominate public perception. As the legal framework evolves, efforts to tackle illicit deepfakes are expected to intensify, with an increase in prosecutions against offenders anticipated in the coming months.
Deepfakes have a poor reputation. Media reports frequently highlight cases where deepfakes distort the image of public figures to ridicule them (e.g., videos portraying President Macron as a garbage collector or Pope Francis in a puffer jacket), to spread false information (e.g., a fabricated assassination attempt on President Macron, Donald Trump shown in prison, or Taylor Swift endorsing President-elect Trump during the U.S. presidential election), or to harm their reputation (e.g., Emma Watson or Taylor Swift depicted in sexually explicit deepfakes). AI tools have made it relatively easy to create such video and audio montages.
Nevertheless, deepfakes have many lawful applications, provided they respect individual rights. Unfortunately, malicious uses remain widespread. The legal framework is evolving, particularly with new provisions introduced under the French SREN Law and the European AI Regulation (AI Act). These developments aim to strengthen the fight against unlawful deepfakes, with an increase in prosecutions against fraudsters anticipated in the months ahead.
1. Deepfakes Are Not Unlawful - In Principle
1.1 What Is a Deepfake?
A deepfake is an image, video, or audio recording that has been generated or manipulated using “deep learning” or “machine learning” models, artificial intelligence techniques that leverage vast amounts of training data to produce content that mimics reality, including faces, voices, and movements. (1) Deepfakes do not necessarily involve altering a person’s appearance; they can also modify objects (e.g., images, paintings, works of art) or locations.
The term "deepfake", (or "hypertrucage" in French), is defined as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.” (2)
The development of deepfake technology is relatively recent, originating around 2016-2017 with the emergence of tools like DeepFaceLab and FakeApp, which were designed specifically for creating such content.
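For readers curious about the underlying technology, the sketch below illustrates, in very simplified form, the shared-encoder / per-identity-decoder architecture commonly associated with face-swap tools of this kind. It is a conceptual toy in Python (PyTorch), not the implementation of any particular tool; the image size, layer widths, and the absence of any training loop are simplifying assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy illustration of the shared-encoder / per-identity-decoder idea often used
# to explain face-swap deepfakes. All sizes are placeholders; no training shown.

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One encoder shared by both identities, one decoder per identity. In training,
# each decoder learns to reconstruct faces of its own identity from the shared
# latent code; at "swap" time, a face of identity A is encoded and then decoded
# with B's decoder, yielding B's appearance with A's pose and expression.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real 64x64 photo of A
swapped = decoder_b(encoder(face_a))   # untrained networks, so output is noise
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```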
1.2 Deepfakes Are Lawful, in Principle
Deepfakes represent a new form of expression and communication and are therefore protected under the right to freedom of expression.
They are used across various fields, including art (creating images with software such as MidJourney or Dall-E, or producing songs using the voices of deceased artists), cinema (digitally "de-aging" actors such as Harrison Ford in the latest Indiana Jones film or Robert De Niro and Al Pacino in The Irishman), and advertising (campaigns like Spotify’s, which used the image of the artist The Weeknd).
1.3 Conditions for the Lawful Use of Deepfakes: Consent, Transparency, and Respect for Intellectual Property
The legality of a deepfake depends on specific conditions, which have been clarified through the adoption of the French Security and Digital Space Regulation Law (SREN Law) on May 21, 2024, and the European AI Regulation (AI Act) on June 13, 2024. (3)
- Consent and Transparency
The SREN Law primarily addresses deepfakes depicting a person’s image and/or voice. Article 226-8 of the French Penal Code has been amended to include requirements for consent and transparency. Under these provisions, the individual whose image or voice is used in a deepfake must give their consent, unless the use of AI or algorithmic processing is explicitly disclosed or the artificial nature of the image or voice is obvious.
Determining whether the use of AI is "obvious" can be subjective and may lead to interpretative challenges. The "obviousness" of a photo or video deepfake should be assessed from the perspective of a reasonably attentive person, without requiring detailed or technical analysis of the content.
These provisions apply to all types of deepfakes, whether for entertainment, art, cinema, or commercial purposes, and regardless of whether the individuals depicted are celebrities or private individuals.
At the European level, the AI Act further reinforces transparency requirements. Under the Regulation, deepfakes fall into the category of “limited-risk AI systems”, which are considered to have minimal impact on decision-making processes. (4) Consequently, the regulation imposes few restrictions on the marketing and use of such AI systems.
However, the AI Regulation establishes a transparency obligation for users ("deployers") of these AI systems. Image, audio, or video content constituting a deepfake must clearly and visibly disclose that it has been artificially generated or manipulated by AI. This requirement also applies to deepfakes of an artistic, creative, satirical, or fictional nature, although in such cases the disclosure may be made in an appropriate manner that does not hinder the display or enjoyment of the work. The transparency obligation does not apply where the use is legally authorized for purposes such as the prevention, detection, investigation, or prosecution of criminal offenses. (5)
These provisions of the AI Regulation will come into effect on August 2, 2026.
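As a practical illustration of what a visible disclosure might look like in a deployer's publishing workflow, the minimal Python sketch below stamps an "AI-generated content" banner onto an image using the Pillow library. The function name, file paths, and wording of the notice are assumptions made for illustration; whether a given form of labeling satisfies the AI Act's transparency requirement in a specific context is a legal question that this sketch does not settle.

```python
from PIL import Image, ImageDraw

def label_as_ai_generated(src_path: str, dst_path: str,
                          notice: str = "AI-generated content") -> None:
    """Overlay a visible disclosure banner on an image before publication."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Draw a dark banner along the bottom edge, then the notice text on top.
    banner_height = max(20, img.height // 15)
    draw.rectangle(
        [0, img.height - banner_height, img.width, img.height],
        fill=(0, 0, 0),
    )
    draw.text((10, img.height - banner_height + 4), notice, fill=(255, 255, 255))

    img.save(dst_path)

# Hypothetical usage (file names are placeholders):
# label_as_ai_generated("generated_frame.png", "generated_frame_labeled.png")
```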
Even if a deepfake meets transparency requirements by disclosing the use of AI, it must also respect individuals' rights, including the right to privacy and image, personal data protection rights, and intellectual property rights.
- Respect for Intellectual Property
If a deepfake incorporates a work protected by intellectual property (e.g., music, paintings, photographs), the creator must obtain prior authorization from the author or their rights owners. Exceptions include cases where the deepfake qualifies as a parody or short quotation under the French Code of Intellectual Property. (6) For example, a deepfake that includes a song or film excerpt in a comedic program and meets the criteria for parody could be created and distributed without obtaining permission from the rights owners.
2. Many Deepfakes Are Considered Unlawful
The negative reputation of deepfakes stems from their malicious or unlawful use: many are created and distributed in violation of individuals' rights, whether those of the persons depicted or of the authors of the works used.
2.1 A Significant Number of Deepfakes Are Fraudulent
According to a report published by Deeptrace in September 2019, 96% of deepfake videos were pornographic in nature. Eight of the ten most-visited pornographic websites hosted deepfake content, and 99% of the individuals targeted were women. (7)
While the use of deepfakes has significantly diversified since 2019, illicit and fraudulent deepfakes still account for a substantial proportion of those circulated.
2.2 Categories of Fraudulent or Malicious Deepfakes
Malicious deepfakes, often created through identity theft, can have far-reaching consequences, including economic or political destabilization (e.g., disinformation or manipulation of public opinion) and harm to an individual’s reputation (e.g., invasion of privacy).
- Identity Theft
Deepfakes can be used to spread false information. For instance, using deepfake technology to depict a celebrity endorsing a presidential candidate, a fabricated attack on a political figure, or false statements attributed to a politician can manipulate public opinion and distort democratic processes.
Similarly, deepfakes leveraging a celebrity’s likeness to promote fraudulent financial schemes or a business leader announcing false financial results can result in severe economic harm to individuals and businesses alike.
- Fraud
Another category of illicit deepfakes involves pornographic content, which may constitute fraud. These manipulated photos or videos replace the original face with that of the victim, who may be a celebrity or a private individual. This type of fraud may also be accompanied by blackmail or cyberbullying targeting the victim - offenses that are punishable under criminal law. (8)
- Violation of Privacy
From a civil law perspective, the unauthorized use of a person’s image or voice may also constitute a violation of privacy as guaranteed by Article 9 of the French Civil Code.
2.3 Strengthened Efforts to Combat Unlawful and Malicious Deepfakes
The legal framework for addressing deepfake-related offenses, such as privacy and image rights violations, extortion, blackmail, and identity theft, was previously incomplete. The SREN Law, the AI Regulation, and the GDPR have since introduced measures to strengthen the fight against unlawful and malicious deepfakes.
- Deepfakes Created Without the Person’s Consent
Article 226-8 of the French Penal Code, as amended by the SREN Law, sets penalties for the distribution of a deepfake depicting a person’s image or voice without their consent. This applies in cases where the artificial nature of the image or voice is not clearly apparent, or there is no explicit mention of the use of AI. The offense is punishable by one year of imprisonment and a €15,000 fine. If the deepfake is shared online, such as on social media, the penalties increase to two years of imprisonment and a €45,000 fine.
Additionally, under the GDPR, a person’s image and voice are considered personal data. Using such data without consent can result in fines of up to €20 million or, for companies, 4% of the previous year’s global annual turnover, whichever is higher. (9)
- Failure to Comply with Transparency Obligations
The AI Regulation imposes a transparency requirement for deepfakes. Non-compliance can result in administrative fines of up to €15 million or 3% of the operator’s total worldwide annual turnover for the previous financial year, whichever is higher. (10)
- Pornographic Deepfakes
The SREN Law introduced a specific offense for creating pornographic deepfakes without the consent of the individual depicted.
Article 226-8-1 of the French Penal Code now provides that the distribution of a pornographic deepfake representing a person’s image or voice without their consent is punishable by two years of imprisonment and a €60,000 fine. If the offense is committed online, the penalties increase to three years of imprisonment and a €75,000 fine. (11)
- Cyberharassment
Sexual deepfakes, as well as those targeting minors, vulnerable individuals, or elected officials, can also constitute cyberharassment.
Cyberharassment is defined as “the act of harassing a person through repeated statements or behaviors that aim to or result in a deterioration of their living conditions, manifesting as harm to their physical or mental health (...)." When cyberharassment is committed using an online public communication service or via a digital or electronic medium, it is punishable by two years of imprisonment and a €30,000 fine. (12)
- Unauthorized Use of a Copyrighted Work
Deepfakes are also subject to intellectual property law. Using a copyrighted work, such as music or photographs, to create a deepfake without the permission of the author or rights owners constitutes copyright infringement. Copyright infringement penalties can reach up to three years of imprisonment and a €300,000 fine. (13)
In conclusion, if you plan to use deepfakes as part of your professional activities, it is essential to consider the principles of consent, transparency (disclosing the use of AI), and copyright compliance. Consulting an attorney can help ensure the legality of a deepfake.
Additionally, if you or your company falls victim to a malicious deepfake, it is advisable to seek legal counsel to analyze the nature of the deepfake and assess the feasibility of pursuing legal action against the perpetrator of the unlawful content.
(1) Unlike a deepfake, a montage is created manually or using non-AI software through traditional editing or retouching techniques.
(2) Regulation (EU) 2024/1689 of June 13, 2024, laying down harmonised rules on artificial intelligence (AI Act), Article 3.
(3) Law No. 2024-449 of May 21, 2024, “Security and Digital Space Regulation” (Loi Sécurité et Régulation de l’Espace Numérique).
(4) AI Act, Recital 53.
(5) AI Act, Recitals 132 to 134 and Article 50 (4) and (5).
(6) Article L.122-5 of the French Code of Intellectual Property.
(7) Deeptrace, The State of Deepfakes: Landscape, Threats, and Impact, September 2019, cited during the review of the SREN Bill in the French Senate, July 2023.
(8) See Articles 312-1 (extortion), 312-10 (blackmail), and 222-33-2-2 (cyberharassment) of the French Penal Code.
(9) General Data Protection Regulation (GDPR), Article 83.
(10) AI Act, Article 99 (4).
(11) SREN Law, Article 15 amending Article 226-8 of the French Penal Code, and SREN Law, Article 21 creating new Article 226-8-1 of the French Penal Code.
(12) Article 222-33-2-2 of the French Penal Code.
(13) Article L.335-2 of the French Code of Intellectual Property.
Bénédicte DELEPORTE
Avocat
Deleporte Wentz Avocat
www.dwavocat.com
December 2024