AI Deepfake Porn Scandal Rocks Sydney High School as Police Launch Investigation
01/09/2025 21:12
A student in Sydney, Australia, allegedly used AI to create and distribute explicit deepfake images of female classmates as concern over AI misuse intensifies.
A Sydney, Australia, high school student faces police investigation and disciplinary action after allegedly using artificial intelligence to create pornographic deepfake images of female classmates, marking one of Australia's first major AI-related incidents in an educational setting.
A deepfake is a synthetic image generated with deep learning algorithms. High-quality deepfakes can be realistic enough to be hard to distinguish from genuine photos. They are often explicit, but they don't have to be.
The male student, whom police did not name, allegedly scraped photos from social media accounts and school events to generate explicit AI images of multiple female students. He then distributed the content through fake social media profiles, according to emails sent by school officials to parents and reported by local media.
The New South Wales Police launched an investigation following reports of "inappropriate images being produced and distributed online," according to The Guardian. The police are working with both the eSafety Commissioner and the Department of Education to address the incident.
"The school has been made aware that a year 12 male student has allegedly used artificial intelligence to create a profile that resembles your daughters and others," read the school's email to affected parents, according to local media. "Unfortunately, innocent photos from social media and school events have been used."
This is not an isolated case. About 530,000 teenagers in the U.K. have encountered nude deepfakes, according to a study by London-based non-profit Internet Matters. Last year, news outlets in Seattle, Washington, reported that a teenager had shared deepfakes of his classmates on social media. The year before, female students in New Jersey discovered that classmates had used their fully clothed photos to generate explicit deepfakes of them.
The problem appears to be spreading as generative AI makes such images increasingly easy to produce. A survey by the Center for Democracy & Technology found that 50% of U.S. high school teachers know of at least one instance in which someone from their school was depicted in AI-generated explicit content.
New South Wales Education Minister Prue Car called the incident "abhorrent" during a Thursday press conference. "There will be disciplinary action for the student," Car said, praising the school's deputy principal for swift action in handling the situation.
The Department of Education emphasized its zero-tolerance stance toward such behavior. "Our highest priority is to ensure our students feel safe," a department spokesperson told The Guardian. The school is providing ongoing wellbeing support to affected students.
This incident follows a similar case in Victoria last June, in which a 17-year-old student allegedly created explicit AI-generated images of about 50 female classmates. That student received a police caution after an investigation.
Legal experts point to gaps in current legislation covering AI-generated explicit content. The Australian Senate passed legislation in August 2024 targeting non-consensual deepfake pornography, while advocates in the U.S. are pushing for the Preventing Deepfakes of Intimate Images Act.
Other legal initiatives to tackle this issue include the Deepfakes Accountability Act, Singapore’s anti-deepfakes legislation, and the EU AI Act.