Three Tennessee high school students sued Elon Musk’s xAI this week, alleging the company’s image-generation tools were used to morph real photos of them into sexually explicit images.
The students, seeking to proceed under pseudonyms, filed the lawsuit in California, where xAI is headquartered. They are pursuing class-action status to represent what the complaint says are thousands of victims who are minors, or who were minors when explicit images of them were generated.
According to the suit, Jane Doe 1 was anonymously notified in December that sexually explicit images of her were being shared on a social media site. The complaint says at least five files—one video and four images—depicted her actual face and body in familiar settings but were morphed into sexually explicit poses. One image was taken from a homecoming photo and another from a high school yearbook. The suit alleges the distributor knew Doe 1 and used xAI’s image-generation tools to create the abusive images.
The suit alleges the same individual created explicit images of at least 18 other girls, two of whom are co-plaintiffs. Local police arrested him in late December and seized his phone. Investigators found he had uploaded the images to several platforms and traded them for explicit images of other minors.
The lawsuit notes that other AI firms have barred their image tools from producing any sexually explicit content, even of adults. It alleges Musk promoted xAI’s Grok chatbot for producing “spicy” content and that xAI released Grok knowing it could generate sexually explicit images of children. The complaint asserts there is currently no way to fully block the generation of explicit images of children while still allowing explicit images of adults. It also claims the distributor accessed Grok through a middleman application that had licensed xAI’s technology or otherwise purchased access.
xAI did not respond to an Associated Press request for comment. On Jan. 14, a post on the platform X said the company is “committed to making X a safe platform for everyone” and has “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The post added that the company removes high-priority violative content, including child sexual abuse material (CSAM) and non-consensual nudity, takes action against violating accounts, and reports accounts seeking child exploitation materials to law enforcement as necessary.
The students say they fear the AI-generated images will persist online. They worry about stalking because their real first names and their school’s name are attached to the files, and about classmates and future viewers seeing images that appear real.
The complaint describes harms to the plaintiffs: Jane Doe 1 has experienced anxiety, depression, stress, trouble eating and sleeping, and recurring nightmares. Jane Doe 2 has begun self-isolating and avoiding campus, and dreads attending her graduation. Jane Doe 3 suffers constant fear and anxiety that someone will recognize her face in the AI-generated images.
