TW: This article discusses deepfake abuse.
It started with a DM.
‘Erm… Is this you?’ a follower wrote, alongside a link to a TikTok video.
I opened it in a panic, because ‘Erm… Is this you?’ doesn’t exactly sound like the start of something good. The video showed me in conversation with someone.
‘Women who don’t have sex deserve to be cheated on,’ I said into the microphone. A purposely inflammatory and deeply misogynistic line, designed to shock and stop the scroll. Clickbait, essentially. My jaw dropped: it was my face, my voice, my mannerisms. It was me… Only, it wasn’t me.
In the video, ‘I’, a ‘women’s health specialist’, was promoting a supplement that promised to fix every possible symptom of menopause. It was a sales pitch, using my likeness and voice to promote a product I had never heard of, let alone endorsed. It was an AI-generated deepfake, totally unauthorised.
I felt sick to my stomach. It’s a uniquely modern kind of horror, watching yourself say something you never said, in a video you never filmed.
Deepfakes have been bubbling up in the cultural conversation for a while now, but I’d assumed it was the kind of thing that only happened to celebrities and other people in the public eye with far larger audiences than me. But it turns out that if you have a public-facing persona and enough video content online, you’re fair game.
The scariest part? It was eerily realistic. I sent the video to my mum, who was confused and called me straight away: ‘Why did you say that?’ she asked. She had picked up on my voice sounding ‘strange’ but without any knowledge of deepfakes or the power of AI-generated content, it hadn’t occurred to her that this wasn’t actually me.
Who else had seen it and believed it was me? How many people had watched it, judged me, maybe unfollowed me, without questioning its authenticity? Was there more out there? More fake videos using my face, my voice and my likeness to sell products, push narratives or worse? I spiralled. I felt violated, scared and unsafe. This had happened without my consent, knowledge, control or power. If a follower hadn’t spotted it and brought it to my attention, I’d still be none the wiser.
I reported it immediately, of course, along with all of the other videos on the account – mostly other deepfakes – and the account itself. I posted the video on Instagram to alert my audience and, unfortunately, soon received the following message: ‘Omg!!!! I thought it was weird when you said women’s health specialist but I literally put this supplement in my basket!!’ I was gutted that people were being scammed.
And it’s still out there: despite being reported hundreds of times, by me, by followers, by friends, it’s still up. The account that posted it remains active.
Which brings us to the responsibility of social media platforms. The product that digital me was shilling was available to purchase directly from the video via TikTok Shop, meaning the platform receives a commission on each purchase. They are, directly or indirectly, profiting from this fraudulent content – and having ignored hundreds of flags, it feels like this simply isn’t a priority for them.
Or perhaps their moderation policies are simply lagging woefully behind the capabilities of AI. A case of ‘the internet is fast; regulation is slow’? Either way, when someone’s face and voice can be stolen and weaponised, and the platform hosting the content doesn’t intervene, it sends a chilling message: we don’t care who gets hurt, as long as people keep watching.
And look: I’m well aware that this is a ‘mild’ case. It’s a fake ad, a misused identity and a bit of reputational concern. What about the women experiencing the far more sinister versions? This tech is already being used to create explicit deepfake pornography, often without the subject’s knowledge. People’s faces are being lifted from social media and seamlessly stitched onto someone else’s body. The effect is disturbingly real, and we’ve seen it happen to celebrities like Scarlett Johansson. But what if this horrifying trend moves beyond women in the public eye to those who aren’t – women who don’t have the platform, legal team or sufficient public sympathy to fight back?
Deepfakes could be the latest evolution of digital misogyny, and the idea that anyone with an online presence could be dragged into this without warning is not just deeply unsettling; it’s dangerous.
The UK is introducing legislation that will criminalise the creation of sexually explicit deepfakes without consent, which is a big and much-needed step. But what about non-sexually explicit deepfake content? There needs to be better regulation that prevents the creation of deepfake content, and better reporting systems that prevent its distribution. But platforms must also act faster in removing it – this is key.
There’s something we all need to do, too: question what we consume. If a video feels off or if a tagline seems too outrageous, pause. Check sources. We’re all part of the digital ecosystem, and the more critically we engage with what we’re shown, the less power these technologies have to deceive, exploit and harm us.
Sadly, we can’t afford to be passive. We’re at a critical point in time where it feels like technology might be outpacing humanity, but awareness, curiosity and compassion can be our resistance.