Schools navigate the new world of explicit AI-generated images

Deepfake photos at Issaquah High School prompt review of policies and procedures.

Within the past year, schools in Washington and around the country have been navigating unprecedented incidents in which male students take photos of female classmates, then use artificial intelligence (AI) technology to superimpose nude imagery onto their bodies.

These images have spread through schools and social media. The incidents have also exposed how unevenly prepared school districts are to address these new exploitative AI apps, and have fueled a push to modernize state and federal laws to safeguard adults and minors.

Issaquah High School became a target for these deepfake images last fall, and a related police report revealed the school’s lack of structure and familiarity with policies and procedures for handling such incidents.

According to a report from the Issaquah Police Department, a 14-year-old male student superimposed breasts and genitalia onto photos — taken from social media and school events — of at least six 14- to 15-year-old female classmates and allegedly a school official.

The images, shared via Snapchat and viewed at school, were reportedly spread to a sister school, Liberty High School.

The police report documented interviews with student witnesses who claimed the male student showed them “step by step” how to create the explicit AI-generated images, also known as deepfakes. Another witness reported that the student was planning to make more images.

According to the Issaquah High School principal, the school staff became aware of the then-rumored images on Oct. 18, 2023. The school began to reach out to parents whose daughters were possibly involved, but continued to keep the investigation internal, according to documents.

In an email, Issaquah Police Chief Paula Schwan described the case as having an “interesting timeline,” noting the police first received information about the incident on Oct. 20, 2023, from a parent.

The parent filed a “sex offense report” after receiving little information from the school regarding potential explicit deepfake images of their daughter. According to the report, two additional parents followed suit and contacted the Issaquah Police Department.

The parents who first reported the incident were “worried that such an image would ruin [their daughter’s] reputation or hinder her goals and or career aspirations,” the police report said.

When the police contacted the school, the assistant principal said she had spoken to the student, who admitted to creating some of the explicit AI-generated images. However, the assistant principal found no explicit images on the student’s phone and said the school was “still looking into the situation.”

On Oct. 23, 2023, a police detective made a follow-up call to the school, in which the assistant principal said she “would not provide the documents related to their investigation, and it was only available through a search warrant,” according to the report.

During the call, the detective questioned why the school official had not reported the incident to the police. In response, she asked, “What was she supposed to report?”

The detective informed her that, as a mandatory reporter, “reporting incidents of sexual abuse to law enforcement is mandatory,” according to the report.

“Per our legal team, we are not required to report fake images to the police,” an Issaquah School District spokesperson told the Issaquah Reporter in an email.

Mandatory reporting

Current laws regarding mandatory reporting to law enforcement primarily focus on cases of physical and sexual abuse and neglect.

Although the law addresses sexual exploitation, no statute covers explicit fabricated images, creating ambiguity within the law.

“[The school official] advised she needed a couple more things done before she could get to reporting the crime and that this whole thing went down on Friday, so there was no time to report anyhow,” according to the detective’s narrative in the report. “I followed up asking when she interviewed [the male student] and she stated Thursday afternoon.”

The report documents that staff became aware of possible images on Wednesday of that week. Although the staff had not seen the images on Wednesday, the Issaquah School District’s Child Abuse, Neglect, and Exploitation Prevention Procedure states: “Personnel need not verify that a child has in fact been abused or neglected. Any conditions or information that may reasonably be related to abuse or neglect should be reported.”

While the obligation to report the incident to law enforcement may be unclear because of outdated laws, the school district’s procedure encourages staff to contact Child Protective Services (CPS) to discuss the circumstances and determine whether a report should be made.

During student interviews, the principal and assistant principal introduced conflicting interview procedures to the police, leading to setbacks and friction between police and school officials, as detailed in the report.

After conducting student interviews on Oct. 24, 2023, detectives reiterated the school officials’ duty to report the incident.

That afternoon, the assistant principal filed a report to CPS — six days after school staff learned about the explicit deepfakes.

According to the report, the male student was temporarily expelled and has since returned to school.

When questioned about the possibility of reassessing or revising procedures, policies or the code of conduct, the Issaquah School District spokesperson referred to the student handbook in an email.

“Any form of bullying, harassment, or mistreatment among our students is entirely unacceptable and goes against our values in the Issaquah School District,” the spokesperson said.

The spokesperson added: “We offered support and guidance to students who were affected. After the incident occurred, we shared our empathy and support for the students and families involved in the situation.”

Other examples

Other schools in the country have tackled similar issues.

While the school has not disclosed the exact number, multiple 10th-grade girls at Westfield High School in New Jersey reported that male classmates used an AI app to generate explicit images of them over the summer.

Once informed in October 2023, the high school began an immediate investigation, said a Westfield School District spokesperson in an email. The Westfield Police Department and the school resource officer were promptly notified and involved throughout the investigation.

The spokesperson said the Westfield High School principal emailed the “school community” the same day, and that students were offered individual and group support from the administration and school counselors.

However, since the incident, Westfield High School families told the Guardian that “the district has done little to publicly address the doctored images or update school policies to hinder exploitative AI use.”

When questioned about the potential reassessment or revision of procedures, policies, or the code of conduct, Westfield School District Superintendent Dr. Raymond González said the district has measures to prevent such incidents from occurring on school networks and devices.

“We continue to strengthen our efforts by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly in our schools and beyond,” he said.

In February, the Beverly Vista Middle School administration in California received notification that five male students had created explicit deepfakes of 16 female classmates. The school reported the incident to law enforcement and notified parents in a letter.

The school promptly removed the five male students, and Beverly Hills Unified School District Superintendent Michael Bregy told NBC they would be punished in accordance with the district’s policies. The district later decided to expel all five students.

California enacted laws against non-consensual sexually explicit deepfakes in 2020, enabling victims to sue creators and distributors.

However, legal uncertainty persists for students at Beverly Vista Middle School, as the law requires the images to be pornographic.

The president of the Cyber Civil Rights Initiative, Mary Anne Franks, told NBC that not all nude depictions of children are legally considered pornographic, further complicating the situation.

Modernizing legislation

At least seven states have enacted laws around explicit fabricated images or added the term to existing laws and definitions, according to the Washington State Standard.

Four other states, including New Jersey, are also looking to advance bills against these images.

Last month, Washington began addressing the creation and distribution of explicit fabricated images, with Gov. Jay Inslee signing a bill into law.

The bill broadened criminal penalties under child pornography laws to include explicit fabricated images of minors. It also amended the Uniform Civil Remedies for the Unauthorized Disclosure of Intimate Images Act, giving adult victims the chance to pursue civil litigation.

“We have a responsibility to protect Washingtonians from this type of abuse in all forms. With this law, survivors of intimate and fabricated image-based violence have a path to justice and must no longer tolerate being harmed,” said Rep. Tina Orwall (D-Des Moines), who sponsored the bill.

Even with the expansion of certain laws, mandatory reporting legislation still lacks clarity on reporting explicit deepfakes.

Some school districts facing these incidents have not yet updated their policies or procedures to deter the creation, viewing and distribution of explicit deepfakes, or to establish specific protocols for responding to them.

Without updated policies and procedures, prompt investigations and decisive actions are unlikely, posing potential risks to the safety and well-being of students. Research from Harvard revealed that victims of explicit AI-generated images often experience fear, shame, anxiety and guilt. Every participant in the study described the experience as causing “irreparable harm.”