More and More Women Are Facing the Scary Reality of Deepfakes

Someone sends you a message: “You need to see this, I’m sorry,” followed by a link. What pops up is your own face, looking back at you as you engage in acts of hardcore pornography. Of course, it’s not really you. But it is your likeness: an image of you that has been mapped onto a video of someone else using AI technology. This is what’s known as a “deepfake.” It’s happening across the globe—to actors, politicians, YouTubers and regular women—and in most countries, it’s entirely legal.

In 2017, a Reddit user called “deepfakes” created a thread where people could watch fake videos of “Maisie Williams” or “Taylor Swift” having sex. By the time it was shut down eight weeks later, the thread had amassed 90,000 subscribers.

According to cybersecurity firm Sensity, deepfakes are growing exponentially, doubling every six months. Of the 85,000 circulating online, 90 percent depict non-consensual porn featuring women. As for the creators, a quick look at the top 30 on one site reveals deepfakers all over the world, including in the U.S., Canada, Guatemala and India. Of those who list their gender, all except one are male.

Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appeared to show her engaging in extreme acts of sexual violence. That night, the images replayed over and over in horrific nightmares, and she was gripped by an all-consuming dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. She still has no idea who did this to her.

These videos may be fake, but their emotional impacts are real. Victims are left with multiple unknowns: who made them? Who has seen them? How can they be contained? Because once something is online, it can reappear at any moment.

The silencing effect

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically how that abuse changes the way women behave online afterward. According to the organization, abuse creates what it calls “the silencing effect,” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong that she thought she’d be carrying this “dirty secret” forever, and she stopped writing.

Then there’s Indian journalist Rana Ayyub. Anti-establishment in her views, Rana is used to receiving hate on social media, but it has never stopped her from speaking out. In 2018, however, someone made a deepfake to discredit her, and it went viral, shared among the prominent political circles in India in which she moved. Like Helen, Rana now self-censors.

“I used to be very opinionated,” she wrote in an opinion piece at the time. “Now I’m much more cautious about what I post online… I’m someone who is very outspoken, so to go from that to this person has been a big change.”

Meanwhile, deepfake “communities” are thriving. There are now dedicated sites, user-friendly apps and organized “request” procedures. Some sites allow you to commission custom deepfakes for $30, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalized,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. The most egregious part, she says, is that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes—for $700 a video.

This has to end. But how?

In 2018, it seemed as if deepfakes might die out when, under public pressure, social media platforms banned them. But continued inaction by lawmakers has emboldened the community. Then came the pandemic, giving creators and viewers more time online to exploit women’s misery, while moderators, themselves working from home, were even slower to act.

Time for change

Right now, we’re at a pivotal moment: a paradigm shift in internet accountability is underway. Despite heavy lobbying from technology companies (Facebook is estimated to have spent roughly $17 million on lobbying in 2019), two new pieces of legislation, the EU’s Digital Services Act and the U.K.’s proposed Online Harms bill, would hold platforms responsible for the content they host.

But although these changes create frameworks that could potentially outlaw deepfakes, neither will actually cover them. Is this because deepfakes only really seem to affect women? When it comes to sexual material involving minors, the necessary laws are already in place. In the Online Harms white paper, abuse against women is not listed as a major “harm” (though uploads from prisons are), and in the EU’s proposals, the word “women” appears only once.

In the U.S., several deepfake laws are in the cards, but they mostly focus on political candidates during election campaigns, thus failing to address “100 percent of the problem,” says Adam Dodge, a lawyer and the founder of the anti-online-abuse campaign EndTAB.

When it comes to the perceived gravity of deepfakes, viewpoints vary widely. Tim Berners-Lee, the inventor of the World Wide Web, believes the “crisis” of gendered abuse online “threatens global progress on gender equality.” Yet many people still don’t see deepfakes as a big deal. According to law professor Mary Anne Franks, people make deepfakes for “entertainment, or voyeurism, or profit… because they simply don’t see that person as a human being.” And, depressingly, our laws don’t yet challenge this.

That’s why a coalition of survivors and advocates, including Helen and Gibi, has started the campaign #MyImageMyChoice, calling for legal change worldwide. Broadcasting survivor stories from Germany to Australia, #MyImageMyChoice is pushing for a joined-up, global human-rights solution to the problem. These are women who will no longer be silenced. “We’re taking control of the issue,” says Helen.

So far, more than 45,000 people have signed the campaign’s petition asking the U.K. government to create world-leading intimate-image abuse laws. There’s hope: at the start of March, the U.K. Law Commission published a consultation paper that includes testimonies from #MyImageMyChoice. Now, campaigners want these ideas put into practice, not just in the U.K. but worldwide.

Porn is a major driver of new technology, and who knows what new weapons of gendered violence are being birthed in forums across the internet right now. So we need future-proof laws that tackle the problem at the root—which is, as Gibi says, “online consent.” We need to convince governments to take the violation of their citizens’ privacy seriously. We need to set an example for the next generation about what they can and cannot do online. And we need to do this now, while lawmakers are grappling with the future of the internet.

If we don’t act on deepfakes, we face a future where women’s faces are traded and their voices muted. We do not accept this future.