The governance of AI-generated pornography: from deepfakes to synthetic animation

By Aurélie Petit

Scotiabank Doctoral Fellow in AI and Inclusion, AI + Society Initiative, University of Ottawa

[Image: photo manipulation. Credit: Phil Shaw]
Pornographic deepfakes can be harmful and abusive. However, not all synthetic sexual visual media fall into the same category. Policymakers must navigate this complexity to avoid unduly hindering online freedom of expression and targeting adult content creators.

The AI + Society Initiative is delighted to continue amplifying the voices of emerging researchers from around the world through our bilingual blog series "Global Emerging Voices in AI and Society," which follows the Global Young Researchers Workshops on AI and Regulation and the Shaping AI for Just Futures conference. This series highlights the innovative research presented at these events, and beyond, to facilitate knowledge sharing and engage a global network in constructive interdisciplinary dialogue.

The following blog was written by Aurélie Petit, who was selected by an international committee to present her poster "The governance of AI-generated pornography: From deepfakes to synthetic animation" as part of the poster session of the Shaping AI for Just Futures conference.

This scholarly contribution is available in English only.

The governance of AI-generated pornography: from deepfakes to synthetic animation

By examining the existing similarities between AI-generated pornographic animation and deepfake pornography, we highlight the need to create regulatory frameworks for governing synthetic visual sex media that properly acknowledge the weight and differences of their respective harms. This approach will help to avoid the unfair targeting of adult content creators who produce non-abusive computer-generated sex media and can fall victim to the unclear governance of deepfake pornography.

What is synthetic pornography?

In late January 2024, a pornographic video using images of hyper-celebrity Taylor Swift generated a wave of online discourse, pushing Canadian politicians to speak up on this issue on a scale not seen before. The video was a deepfake: a form of manipulated media produced through advanced AI techniques. Deepfakes involve the use of artificial intelligence algorithms to superimpose or replace someone's likeness in video or images, often in a realistic and deceptive manner. Shortly after, Canada's federal government announced its plan to ensure that sexually explicit deepfakes would be included in future legislation governing online harms.
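
For readers unfamiliar with the underlying technique, the sketch below illustrates, in simplified PyTorch, the shared-encoder, dual-decoder design used by classic open-source face-swap tools: a single encoder learns a common facial representation, a separate decoder is trained per identity, and the "swap" consists of decoding one person's encoded face with the other person's decoder. This is a minimal conceptual toy under those assumptions, not a working deepfake system; real pipelines add face detection, alignment, large training datasets, adversarial losses, and blending.

```python
# Minimal sketch of the shared-encoder / dual-decoder idea behind
# classic face-swap ("deepfake") tools. Conceptual only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face into a shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder; one decoder per identity. During training, each
# decoder learns to reconstruct its own person's faces from the shared
# latent space. The "swap" is simply routing person A's encoding
# through person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a real photo
swapped = decoder_b(encoder(face_of_a))   # A's expression, B's identity
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```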

This decision did not come as a surprise. If confronted with realistic visual manipulations of yourself depicted in a sexual situation to which you did not consent, your first reaction would probably be one of shock. As a digitally literate user, you would likely articulate this reaction with a statement like: “I am a victim of deepfake pornography”.

In Deepfakes: A Real Threat to a Canadian Future, the Canadian Security Intelligence Service reports that ‘over 90 percent of deepfakes available online are non-consensual pornographic clips of women … Women are almost always the non-consenting targets or subjects of pornographic deepfake videos, and current legislation offers victims little protection or justice’. As Canadian policymakers work to include it in legislation, the governance of deepfake pornography remains a pressing topic at the center of contemporary women’s digital rights.

But the shock that comes with witnessing a deepfake of oneself does not always arise for less realistic sexual depictions. Consider how you might react to viewing an animated stick figure, with your name printed above it, engaging in a similar sexual act. When confronted with less realistic images, the intuition that one is a victim of something profoundly harmful tends to diminish. This distinction between the hyper-realistic deepfake and the abstract stick-figure rendition reminds us that synthetic animated media exists on a spectrum.

Even though in both situations the image has been fabricated with AI tools, we have very different intuitions about the scale of their harmfulness. Synthetic, hyper-realistic visual media such as deepfakes challenge the line often drawn between animation and indexical images captured by a camera. This tension, that of live-action media which is not live-acted, must inform their governance, all while accounting for their coexistence with other forms of non-realistic synthetic visual media.

Policy tensions

As policymakers rush to articulate taxonomies of harm over synthetic visual media, it will be imperative that governance responses to deepfakes consider their existence alongside other forms of non-realistic synthetic visual media that are themselves more akin to computer-generated animation than to live-action.

Since the harms of deepfakes have been considered comparable to real harm, the regulatory urge has been to develop strategies to quickly ban any rendition from video and social media platforms, often relying on automated detection technologies. However, when no distinction is made between (i) non-realistic synthetic pornography, (ii) deepfake pornography, and (iii) live-action pornography, this approach unfairly harms animated porn creators who produce non-abusive material.

Policies on deepfake pornography cannot be framed around live-action media governance by centering the elimination of all ‘non-live-action porn’ for which we do not have access to performers’ biometric data (and thus cannot verify consent or age). This approach risks roping in non-abusive synthetic media and computer-generated porn made by animation workers who cannot provide IDs or consent forms for fictional characters, and whose fanart has a long history of representing existing celebrities.

Once this tension between multiple forms of manufactured visual pornography (from animation to synthetic media) that can resemble each other is acknowledged, we can refine deepfake policies to prevent them from unfairly targeting non-abusive sexual media. We can then challenge the reliance on automated detection technologies that are not equipped to differentiate between animation made by a human and animation generated with AI.
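
To make this limitation concrete, here is a hedged sketch of how a binary "synthetic or not" detector score flattens the spectrum described above; all class names, scores, and thresholds are hypothetical. Note that the spectrum-aware rule at the end presumes a category label that current automated detectors cannot reliably supply, which is precisely the governance gap at issue.

```python
# Hypothetical sketch: why a binary "synthetic?" score flattens the
# spectrum of sex media. All names and numbers here are illustrative.
from dataclasses import dataclass
from enum import Enum, auto

class MediaKind(Enum):
    LIVE_ACTION = auto()              # camera-captured footage
    DEEPFAKE = auto()                 # hyper-realistic synthetic likeness
    NON_REALISTIC_SYNTHETIC = auto()  # AI-generated but cartoon-like
    HAND_DRAWN_ANIMATION = auto()     # human-made animation

@dataclass
class Item:
    kind: MediaKind     # ground truth a detector does NOT actually know
    p_synthetic: float  # score from a generic AI-content detector

def naive_policy(item: Item) -> bool:
    """Ban anything the detector flags as synthetic.

    This is the flattening move: a deepfake, a non-realistic
    AI-generated image, and even stylized human-made animation can
    all score high, so all are banned alike.
    """
    return item.p_synthetic > 0.5

def spectrum_aware_policy(item: Item) -> bool:
    """Ban only hyper-realistic synthetic depictions of real people.

    Caveat: this rule requires a category label that automated
    detectors cannot reliably produce today.
    """
    return item.kind is MediaKind.DEEPFAKE

moderation_queue = [
    Item(MediaKind.DEEPFAKE, p_synthetic=0.97),
    Item(MediaKind.NON_REALISTIC_SYNTHETIC, p_synthetic=0.93),
    Item(MediaKind.HAND_DRAWN_ANIMATION, p_synthetic=0.71),  # false positive
]
for item in moderation_queue:
    print(item.kind.name, naive_policy(item), spectrum_aware_policy(item))
```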

The Ariana Grande cases

To explore this, we will investigate three cases of non-live-action pornography from non-abusive sex media. All three could fall under the definition of ‘non-consensual sexual representation’ or ‘computer-assisted pornography,’ and yet they are neither deepfakes nor inherently harmful.

The cases chosen are porn media made using the likeness of pop singer and actress Ariana Grande. After rising to fame as a child TV actress in the late 2000s, Grande went on to lead a very successful singing career as one of the world’s best-selling music artists. Along with her fame, Grande amassed an impressive fanbase, with 376 million followers on the social media platform Instagram. An unintentional consequence of this massive following has been the continuous production of non-consensual porn media representing her, ranging in form from 2D cartoons to deepfakes.

Three examples are a porn cartoon image of a character identified as Grande, an animated porn video made using a 3D model of her character in the videogame Fortnite, and a non-realistic synthetic porn image generated by an algorithm. Each of these media is user-generated content and was collected on the online platforms where it was first shared. Although these media are not hyper-realistic, Grande is identified by her name and by the well-known aesthetic choices she made during the early stages of her singing career: a long, high ponytail, a miniskirt, and high heels. These details make her easily recognizable in fanart, and therefore a popular choice for porn media creators.

Additionally, Grande was chosen for this study precisely because her fame provides her with an added layer of security against non-consensual representations, and we can assume that she suffers fewer professional and personal repercussions from them than non-celebrity victims do. Because celebrities’ image manipulations often receive major press coverage, they have the power to shape policies, as we saw in the aforementioned Taylor Swift example. However, few comparisons are drawn between deepfakes of women celebrities (who are the most common victims) and fanart made using their image, whether computer-generated or non-realistic synthetic media.

The need for nimble policy intervention

By looking into porn images of Ariana Grande, we find productive ground to think through the governance of multiple forms of synthetic visual porn media, and the need to be precise about where we draw the line between hyper-realistic and non-realistic images.

Policies must consider (i) the spectrum of harms that exists between deepfakes and non-realistic synthetic media that look like animation (even though both are AI-assisted content), and (ii) the animation workers who can be unfairly targeted when this spectrum is flattened in favor of live-action media governance.

This distinction is all the more needed because previous attempts to regulate animated sex media over the last few years have impacted the bodily, financial, and social autonomy of animated pornography content producers. In December 2023, the streaming platform Twitch announced a change in its policies for sexual content in response to criticism of poorly managed moderation techniques that could (and did) unfairly target women and marginalized users. Some nudity in in-person content was now permitted, provided the creator ensured the stream was classified as containing sexual themes or mature content. More surprisingly, streamers were now allowed to depict ‘fully exposed female-presenting breasts and/or genitals or buttocks regardless of gender’ if the content was drawn, animated, or sculpted. While the policy was presented as a means of ‘allow[ing] the thriving artist community on Twitch to utilize the human form in their art’ (Twitch, 2023), it was quickly reversed following a surge of sexually graphic illustrated content on the platform.

The following day, ‘depictions of real or fictional nudity’ were no longer allowed on Twitch. The platform explained that it had gone ‘too far with this change’ and that ‘digital depictions of nudity present a unique challenge’ because ‘AI can be used to create realistic images’ (Twitch, 2023). As the platform rushed to moderate what it saw as unlawful illustrated content, it unfairly targeted dozens of animation content creators, some with half a million subscribers, who were automatically banned for breaching policies despite staying within the new rules announced just before. Once these content creators were associated with adult industry workers, their labor was seen as disposable, something the platform simply tried to get rid of as quickly as possible.

This recent case exemplifies the consequences of grouping all sex media under the same umbrella. Instead, when legislators articulate policies for deepfakes to be banned, there must be a regulatory awareness of the differences in harms between hyper-realistic and non-realistic synthetic visual sex media, and thus an acknowledgement of the different forms that pornography can take.    

To outline what should be taken into consideration in deepfake regulation, we propose not isolating synthetic visual sex media from the historical ecosystem of other computational pornography, such as CGI porn or adult content made using video game engines. Doing so helps identify which challenges are inherent to the governance of hyper-realistic synthetic visual media (such as access, AI literacy, sexual education, and scale), and will help prevent non-abusive sexual media from being unfairly targeted instead.

Resources to learn more

Partnership on AI (February 27, 2023). PAI’s Responsible Practices for Synthetic Media.

Prud’homme, Benjamin, Régis, Catherine & Farnadi, Golnoosh (eds.) (2023). Missing links in AI governance, UNESCO/MILA.

Goddard, Valentine (June 2, 2023). Art Impact AI Coalition – Support Artists Voices on the Future of AI, Change.org.

Lapointe, Valerie A. & Dube, Simon (April 9, 2024). AI-generated pornography will disrupt the adult content industry and raise new ethical concerns, The Conversation.

Gaur, Loveleen (ed.) (2022). DeepFakes: Creation, Detection, and Impact, CRC Press.

About the author

Aurélie Petit is a PhD Candidate in the Film Studies department at Concordia University, Montréal. She specializes in the intersection of technology and animation, with a focus on gender and sexuality. During the summer of 2023, she was a PhD Intern at Microsoft Research, where she worked on the limits of applying live-action governance frameworks to animated pornographic media. She is currently a Scotiabank Doctoral Fellow in AI and Inclusion at the AI + Society Initiative at the University of Ottawa, working with Professor Jason Millar and the CRAiEDL on the ethics of deepfake pornography.


This content is provided by the AI + Society Initiative to help amplify the conversation and research around the ethical, legal and societal implications of artificial intelligence. Opinions and errors are those of the author(s), and not of the AI + Society Initiative, the Centre for Law, Technology and Society, or the University of Ottawa.