Elon Musk's Grok AI Is Being Sued for Creating Nonconsensual Deepfake Nudes of Minors

Photo by Salvador Rios on Unsplash
Three Tennessee teenagers are taking legal action against Elon Musk’s xAI, alleging that the company knowingly allowed its Grok AI software to power third-party apps used to create sexually explicit deepfakes of them and at least 18 other minors. The class-action lawsuit represents a major escalation in the growing controversy surrounding Grok’s ability to generate nonconsensual intimate imagery.
According to the lawsuit, the teens’ yearbook photos and social media images were used to create realistic-looking nudes and sexually explicit videos. These deepfakes were then distributed through Discord and Telegram, where they circulated among users and were allegedly used to solicit additional child sexual abuse material. One plaintiff discovered a Discord server containing similar images of at least 18 other young women from her school.
Here’s where it gets particularly concerning: the perpetrator didn’t use Grok directly or post the images on X (formerly Twitter). Instead, they used an unnamed third-party app powered by Grok’s technology. The lawsuit argues that xAI strategically licensed its technology to outside developers, including some operating overseas, in a deliberate move to distance itself from legal liability. The perpetrator was eventually arrested, but the damage had already been done.
The complaint highlights a critical flaw in how xAI has approached content moderation. The images appeared completely realistic and weren’t labeled as AI-generated, with one video depicting a plaintiff undressing entirely. The lawsuit argues that if Grok’s system can generate sexualized images of adults (which it demonstrably can), there’s no reliable way to prevent it from creating child sexual abuse material.
This lawsuit doesn’t exist in a vacuum. Back in February, the Washington Post reported that Grok’s permissiveness around explicit content was intentionally used as a growth strategy. The complaint asserts that xAI and Musk “saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children.”
The lawsuit also references comments from Ashley St. Clair, a conservative content creator and the mother of one of Musk’s children, who revealed that Grok generated nonconsensual sexual images of her, including some depicting her as a minor.
The teens involved say they’re pushing for systemic change in how AI companies handle sexually explicit content, hoping to make it economically infeasible for companies to enable this kind of abuse. California Attorney General Rob Bonta has already launched an investigation into Grok, and 35 state attorneys general have demanded that xAI implement protections against creating child sexual abuse material and nonconsensual deepfakes. European authorities are investigating as well.
Meanwhile, Musk has announced mass layoffs at xAI, claiming the company needs to be “rebuilt from the foundations.” Whether that’s a genuine commitment to addressing these issues or just damage control remains to be seen.
AUTHOR: mei
SOURCE: SFist