In the age of AI advancements, the line between reality and deception on our screens is increasingly blurred. One concerning phenomenon is the rise of “deepfake porn,” in which creators offer paid services to produce explicit content featuring individuals chosen by buyers. Specialized deepfake websites circulate large numbers of not-safe-for-work videos, fueling the practice’s growth.
Deepfake porn relies on AI technology accessible through various apps and platforms. These algorithms use deep-learning techniques to digitally remove clothing from images of women and replace it with explicit content. Although men can also be targeted, the training data for these algorithms consists primarily of images of women.
The synthetic sexual images generated by AI are inherently fictional, depicting events that never occurred. What is troubling is that the models behind them are trained on real people’s images, often obtained without consent. In the vast online landscape, distinguishing consensual from non-consensual distribution of these images becomes increasingly difficult.
It is essential to recognize that creating fake erotic content is not inherently unethical; online spaces can provide a safe environment for individuals to explore. The problem arises when fabricated explicit images are disseminated without the subject’s consent, causing profound harm and invading their privacy.
Most people think of deepfakes as manipulated video clips of world leaders designed to sow confusion. Yet the most pressing threat of deepfakes is not politics but porn. Research shows that around 96 per cent of deepfake videos are pornographic, and almost all of them involve non-consenting women.