McGill Policy Association

In Disguise: How Much Longer Must Women Be Sexually Exploited Online?

Image By: MIT Technology Review

With the advent of AI technology as a defining factor in our current political and online climate, the world has been propelled into a state of unknown possibilities. The definition of AI has expanded well beyond the realistic-looking robots it was once presumed to mean. Within the past few years, the web has absorbed this technology into the fabric of its being, with tools such as ChatGPT now common knowledge. The end of 2023 seems to mark the fading of the initial glitz-and-glam era that characterized the launch of these programs, not only through their integration into everyday life, but also through a growing examination of AI's role in significant human rights violations over the years.

A particular aspect of AI technology that has gained traction is the prolific use of deepfake imagery. Deepfakes are synthetic images or videos that use AI to mimic the expressions and actions of real people, animals, or objects. Although deepfakes encompass any displacement of someone's features or manipulation of their actions, they have been used almost exclusively to depict people in situations they never engaged in. In the political sphere, this issue has been reported many times, such as when videos surface of prominent figures making inappropriate comments, like the fabricated clip of Mark Zuckerberg announcing he had "total control of billions of people's stolen data." AI technology appears to be at the forefront of current news, such as the growing presence of AI-generated essays in university classrooms, but deepfakes have existed for a long time. Despite this extended history, discussions are still underway on how to regulate them, especially in relation to their most detrimental use: pornography.

Deepfakes have been most commonly found in illegal and nonconsensual pornography, with nearly 96% of deepfake videos created for this purpose. Furthermore, within these deepfake reserves, 99% of the videos superimpose the faces of female celebrities onto pre-existing sexual content. In fact, deepfakes originated in pornography, with the first instances traced to a 2017 Reddit post that mapped popular figures like Taylor Swift and Scarlett Johansson onto inappropriate videos. At the time, when this sort of AI technology was in its developing stages, creating deepfakes was time-consuming. It involved programs that first learned the shared facial features across thousands of images of the two individuals to be superimposed. To "swap" a face, one would feed one person's compressed images into the decoder trained on the other person, which then reconstructed the first person's expressions on the second person's face. This was only one way to develop deepfakes, but in today's highly advanced digital expanse, exchanging facial features is easier than it's ever been.
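
To make the encoder-decoder mechanism described above concrete, here is a minimal conceptual sketch in Python with PyTorch of the shared-encoder, dual-decoder architecture those early tools used. The layer sizes, toy 64x64 images, random stand-in data, and training loop are illustrative assumptions, not any particular tool's implementation; real systems additionally require face detection, alignment, and vastly more data and training.

```python
import torch
import torch.nn as nn

LATENT = 256  # size of the compressed face representation

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, LATENT),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to rebuild *its own* person from the
# shared encoding.
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()
faces_a = torch.rand(16, 3, 64, 64)  # random stand-ins for real face crops
faces_b = torch.rand(16, 3, 64, 64)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's expression, decode with person B's
# decoder, yielding person B's face wearing person A's expression.
swapped = decoder_b(encoder(faces_a))
```

Because both decoders train against the same encoder, the encoder is forced to learn identity-agnostic facial structure (pose, expression, lighting), which is precisely what makes the final swap possible.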

With the growing ease of finding and using this technology, deepfakes are now made at higher rates than ever before, since what once cost thousands of dollars to produce can now be created for only a few. In 2023 thus far, more than 500,000 deepfake videos have been shared across social media alone, not accounting for porn sites, where most of this material resides. It is evident why deepfakes of this sort are problematic: they lack the consent of the women appearing in the pornography, carry detrimental consequences for victims' mental health and reputations, and increase the possibility of minors being illegally used in such content. As deepfake porn gains recognition as a human rights violation, more and more women have begun to speak out about their experiences as victims of deepfake AI. In an article published in The Guardian, Helen Mort recounts the shame and confusion that took root after she first learned of such videos of herself through a third party. Her account reflects a notion commonly shared by other targets of this phenomenon: worry, fear, and anxiety, all triggered by the jarring misalignment between one's sense of self and a distant yet violent and invasive depiction. Those who have never been used in deepfake technology may not quite grasp its psychological toll, since no direct physical injury is involved.

This only scratches the surface of how this technology affects women, many of whom have jobs, children, spouses, and established reputations. When it comes to the use of underage girls in deepfakes, however, the problem becomes significantly more dire. Those attempting to tackle falsified child pornography and abuse images must approach the issue on two levels. Officials often pursue the creators of this content only to find that the imagery is virtual. Real pictures of abuse victims are reused in many deepfakes, which makes identifying the original abusers, as well as those who manufacture the new videos, far more difficult. In recent months, there have been warnings about an explosion of AI-generated child pornography spreading across the Dark Web. Because these images are synthetic, this category of illicit material is extremely complicated to track down, unlike real media, which leave digital footprints. Deepfaked child pornography sparked particular discussion this past April, when a man from Quebec was sentenced to prison for generating thousands of fabricated pornographic files using the faces of underage children.

With these issues spilling into greater public view, governments across the world have brought forward policy changes to tackle the steady increase of deepfakes. The matter has been of particular importance to governmental institutions because of how AI technology has been used to misrepresent political figures. Canada, for example, approaches deepfake regulation with a three-step method: prevention, detection, and response. Deepfake cases have yet to reach Canadian courts, but through existing privacy torts and criminal provisions, perpetrators can be charged and victims may obtain relief. In the United States, however, no federal laws regarding the creation and spread of deepfake porn have yet been established. Chillingly, governments may never find a true solution to the problem, as they cannot exercise full control over the regulation of media.

Law enforcement agencies across the world have expressed apprehension about the impact of AI technology "on the way people perceive authority and information media," as Europol stated in a report on deepfakes last year. The Chinese government has introduced a new policy that seeks to regulate deepfake services through a digital signature and watermark system, ensuring consent from all participants in any content and requiring deepfake providers to adequately dispel rumors. The worry remains with anonymous creators, who are far harder to track down and hold accountable, but there is hope, as several countries have begun to roll out specific plans of action to tackle AI activity, such as the newly established Artificial Intelligence Task Force (AITF) in the U.S., which aims to "apply AI to digital forensic tools to help identify, locate, and rescue victims of online child sexual exploitation and abuse, and to identify and apprehend the perpetrators." The European Parliament has also formulated policy options for targeting illegal uses of AI, such as institutionalizing support for victims of deepfakes or creating legal obligations for providers of deepfake technology. In addition to governmental efforts, control over access to illegal deepfakes lies in the hands of online search engines and social media platforms, particularly Google.
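
As a rough illustration of how a signature-and-consent scheme in the spirit of the Chinese rules could work in practice, here is a hypothetical sketch in Python using the widely available cryptography package. The key handling, consent-record format, and function names are assumptions made for illustration, not the actual regulatory specification.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical provider-side provenance scheme: the deepfake provider
# signs each piece of synthetic media together with its consent record,
# and platforms verify the signature before distribution.

provider_key = Ed25519PrivateKey.generate()  # held by the provider
public_key = provider_key.public_key()       # published for verifiers

def sign_media(media_bytes: bytes, consent_record: bytes) -> bytes:
    """Bind the media and its consent record together in one signature."""
    return provider_key.sign(media_bytes + consent_record)

def verify_media(media_bytes: bytes, consent_record: bytes,
                 signature: bytes) -> bool:
    """A platform checks that neither media nor consent was altered."""
    try:
        public_key.verify(signature, media_bytes + consent_record)
        return True
    except InvalidSignature:
        return False

media = b"<synthetic video bytes>"
consent = b"participant=Jane Doe; consent=granted; date=2023-04-01"
sig = sign_media(media, consent)
print(verify_media(media, consent, sig))              # True
print(verify_media(media, b"tampered consent", sig))  # False
```

Binding the consent record into the signed bytes means a platform can reject content whose consent metadata has been stripped or altered, not merely content whose pixels have changed; the anonymous creators the policy worries about, of course, simply never sign at all.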

The first step many tech policy lawyers have taken is an attack on the deepfake websites that allow easy creation of this material. Because these websites rely on the technology and services of larger, publicly traded companies such as PayPal Holdings Inc. and Visa Inc., they are able to function as pseudo-corporations and stay afloat. In addition, progress on prevention and detection technology depends almost exclusively on the policies of the web spaces where such pornography proliferates, including porn sites and even our most-used apps, like TikTok and Instagram. Although activists have called for regulatory change within these online domains, little has been advanced in this field. As these technologies blend together, it becomes increasingly difficult to target any single aspect within them. Thus, dismantling deepfake technology requires holding larger companies accountable for fueling the existence of the smaller sites whose illegal activity sustains the deepfake pornography industry.