The San Francisco Frontier | Est. 2025

Elon Musk's AI Company Faces Lawsuit Over Sexually Explicit Images of Teenage Girls

Photo: two hands touching in front of a pink background. (Photo by Igor Omilaev on Unsplash)

Three high school students from Tennessee are taking legal action against Elon Musk’s xAI, alleging that the company’s image-generation technology was weaponized to create non-consensual sexually explicit images of them as minors. The lawsuit, filed in California where xAI is headquartered, seeks class-action status to represent thousands of other victims who experienced similar abuse.

According to the complaint, one plaintiff, identified as Jane Doe 1, discovered in December that someone had distributed sexually explicit images of her, created with xAI’s Grok chatbot by manipulating real photos taken from her homecoming picture and high school yearbook. At least 18 other girls fell victim to the same perpetrator, who used Grok’s technology to create explicit content and then traded the images on various platforms for additional child sexual abuse material. Local police eventually arrested the individual and found evidence of widespread distribution.

Here’s where things get particularly messy: while most major AI companies have implemented safeguards to prevent their image generators from creating any sexually explicit content whatsoever, xAI took a different approach. According to the lawsuit, Musk actually promoted Grok’s ability to generate “spicy” content as a selling point, essentially marketing the tool’s lack of restrictions as a feature rather than treating it as a bug. The problem is that there’s currently no technical way to block explicit images of children while allowing explicit images of adults, meaning xAI knew the risks but released the technology anyway.

The emotional toll on these teenagers is severe and ongoing. Jane Doe 1 now struggles with anxiety, depression, and sleep issues while experiencing recurring nightmares. Jane Doe 2 has withdrawn from school life, avoiding campus whenever possible and dreading her own graduation. Jane Doe 3 lives with constant fear that someone will recognize her face in the AI-generated images. Beyond the psychological damage, these students are terrified that the synthetic images will circulate forever online, attached to their real names and school information.

When reached for comment, xAI didn’t directly address the allegations. Instead, a post on X (formerly Twitter) claimed the company maintains “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The statement rings hollow given the circumstances outlined in the lawsuit.

This case represents a critical moment for AI accountability. As technology companies continue pushing boundaries in pursuit of profit, the real-world consequences for vulnerable people, especially minors, keep multiplying. The teenagers involved in this lawsuit are fighting back, and their case could force the industry to finally take child safety seriously.

AUTHOR: mb

SOURCE: NBC Bay Area
