Their selfies are being turned into sexually explicit content with AI. They want the world to know.

Evie, 21, was on her lunch break at her day job last month when she got a text from a friend, alerting her to the latest explicit content that was circulating online without her consent.
This time, it was a graphic fan fiction-style story about her that was created by “Grok,” X’s artificial intelligence-powered chatbot. Weeks earlier, she'd been the subject of another attack when a user shared her selfie and asked Grok to turn it into explicit sexual imagery.
“It felt humiliating,” says Evie, a Twitch streamer who asked that we withhold her last name to conceal her identity from her online trolls, who have become increasingly aggressive.
In June, Evie was among a group of women who had their images nonconsensually sexualized on the social media platform X. After posting a selfie to her page, an anonymous user asked Grok to edit the image in a highly sexualized way, using language that got around filters the bot had in place. Grok then replied to the post with the generated image attached.
Evie says she is vocal on X about feminist issues and was already subject to attacks from critics. Those accounts had made edits of her before, but they had been choppy Photoshop jobs ‒ nothing as real-looking as Grok's.
“It was just a shock seeing that a bot built into a platform like X is able to do stuff like that,” she says over video chat, a month after the initial incident.
X has since blocked certain words and phrases used to doctor women’s images, but on June 25, an X user prompted Grok to write a story in which the user “aggressively rapes, beats and murders” Evie, making it “as graphic as you can” with an “18+ warning at the bottom.”
“It just generated it all,” she says. “(The user) didn’t use any words to try to cover it up, like they did with the pictures.”
X did not return Paste BN's multiple requests for comment.
Evie says she saw at least 20 other women on her own X feed who had their photos sexualized without their consent. It also happened to Sophie Rain, an OnlyFans creator with over 20 million followers across social media platforms, who posts sensual content but never full nudity.
“It’s honestly disgusting and gross,” she says. “I take my religion very seriously. I am a virgin, and I don’t condone this type of behavior in any way.”
This trend is part of a growing problem experts call image-based sexual abuse, in which “revenge porn” and deepfakes are used to degrade and exploit another person. While anyone can be victimized, 90% of the victims of image-based sexual abuse are women.
“This is not only about sexualized images of girls and women, it’s broader than that,” says Leora Tanenbaum, author of “Sexy Selfie Nation.” “This is all about taking control and power away from girls and women.”
The ‘Take It Down Act’ aims to combat nonconsensual sexual imagery. Is it working?
In May 2025, the Take It Down Act was signed into law to combat nonconsensual intimate imagery, including deepfakes and revenge porn.
While most states have laws protecting people from nonconsensual intimate images and sexual deepfakes, victims have struggled to get images removed from websites, increasing the likelihood that the images will continue to spread and retraumatize them. The law requires websites and online platforms to remove nonconsensual intimate imagery within 48 hours of a verified request from the victim.
However, as of July 21, the altered photo of Evie is still publicly accessible on Grok's verified X account. Evie mobilized her nearly 50,000 followers to mass report Grok's post, but she says X Support told her it was not a violation of the platform's content guidelines.
AI's ability to flag inappropriate prompts can falter
In a conversation with Grok, Paste BN asked the chatbot to play out a scenario in which a user asked it to generate explicit content, with clear instructions not to actually produce any such material during the conversation.
One example of the "coded language" Grok says it is programmed to flag is "subtle requests for exposure," meaning prompts that ask to make photos of women more revealing. Phrases that could trigger a flag include "adjust her outfit," "show more skin" and "fix her top."
"Even if worded politely, I flag these if the intent appears inappropriate," Grok said via AI-generated response on July 15.
The key word is intent. Grok's ability to turn down potentially inappropriate prompts "relies on my ability to detect the intent, and public images remain accessible for prompts unless protected," the chatbot says.
You can block or disable Grok, but doing so doesn't always prevent modifications to your content. Another user could tag Grok in a reply, request an edit to your photo, and you wouldn't know it because you have Grok blocked.
"You may not see the edited results, but the edit could still occur," Grok clarified during our conversation.
The better solution is to make your profile private, but not all users want to take that step.
It's not just about sex ‒ it's about power
After experiencing image-based sexual abuse, Evie considered making her X account private. She was embarrassed and thought her family might see the edits. However, she did not want to give in and be silenced.
"I know that those pictures are out now, there's nothing I can do about getting rid of it," she says. "So why don't I just keep talking about it and keep bringing awareness to how bad this is?"
When it comes to generating deepfakes or sharing revenge porn, the end goal isn't always sexual gratification.
Users may target women who are using their platforms to speak about feminist issues as a degradation tactic. Evie says what hurt the most was that, rather than engage in a discussion or debate about the issues she was raising, her critics opted to abuse her.
In her research, Tanenbaum has seen varied responses from victims of image-based sexual abuse, ranging from engaging in excessive sexual behavior to "a total shutdown of sexuality, including wearing baggy clothes and intentionally developing unhealthy patterns of eating to make oneself large, to be not sexually attractive in one's own mind." The individuals she spoke to, who had been victimized in this way, called it “digital rape” and “experienced it as a violation of the body.”
Even if someone understands, logically, that a sexually explicit image is synthetic, once their brain sees and processes the image, it's embedded in their memory bank, Tanenbaum says.
The human brain processes images 60,000 times faster than text, and 90% of the information transmitted to the brain is visual.
"Those images never truly get scrubbed away. They trick us because they look so real,” Tanenbaum explains.
Evie wants to believe that it "didn't really get to her," but she notices she's more thoughtful about the photos she posts, wondering whether she's showing so much skin that an AI bot could more easily undress her. "I always think, 'Is there a way that someone could do something to these pictures?'"