Adobe's Sexualized AI Images at LA Elementary School Expose California's Weak AI Safeguards

Photo by Siarhei Horbach on Unsplash
When a fourth grader at Delevan Drive Elementary School in Los Angeles used Adobe Express for Education to create a book cover for a Pippi Longstocking report, things went sideways fast. Instead of generating an image of the red-haired Swedish character with braids, the AI tool produced sexualized images of women in lingerie and bikinis. One parent, Jody Hughes, discovered the issue and quickly found that other students had experienced the same problem. Welcome to “Pippigate,” a scandal that has exposed just how unprepared California schools are to handle AI’s darker side.
The incident happened just as California’s Department of Education was rolling out new statewide guidelines aimed at preventing exactly this kind of mishap. The guidelines came after the state legislature passed two laws in 2024 demanding the department get a handle on AI’s rapid spread through schools. Adobe claims it fixed the issue within 24 hours of learning about it, but that doesn’t undo the fact that the software was deployed in schools without proper vetting.
Here’s the problem: the new guidelines are vague and don’t go far enough. They urge educators to integrate ethical discussions about AI into classrooms and encourage critical thinking, but they don’t actually tell schools how to do that. They also don’t give clear guidance on how parents and teachers can opt out of using AI tools altogether. Charles Logan, a former teacher now at Northwestern University, points out that the guidelines seem to assume AI adoption is inevitable, which is exactly the problem.
Governor Newsom reinforced this inevitability narrative when he vetoed a bill last year that would have restricted chatbot use by minors, arguing that students need to be prepared for a world where AI is everywhere. But not everyone agrees. Critics say that approach ignores legitimate safety concerns and puts the burden on individual families to navigate technology that’s rapidly evolving and often poorly tested.
California’s students, the majority of whom are students of color, deserve better protections. Research shows that young Black and Latino people use generative AI more than their white peers, yet they face greater risks from AI bias and stereotyping. Julie Flapan, director of the Computer Science Equity Project at UCLA, warns that technological advances often exacerbate inequalities rather than level the playing field.
The fact that even Los Angeles Unified, the state’s largest school district, botched an AI tutor rollout just months before Pippigate suggests this isn’t a problem that’s going away. More safeguards are coming: the Department of Education will release specific policy recommendations by July, and new bills aim to protect student privacy and restrict certain AI applications. But until schools have real tools and training to evaluate AI before it reaches students, incidents like this one will keep happening. The question is: how many more times will we have to learn this lesson before the state actually requires thorough testing?
AUTHOR: tgc
SOURCE: CalMatters