AI-generated child sexual abuse material (CSAM) is surging to unprecedented levels, and experts warn the crisis is still in its early stages. Generative AI is not only flooding the internet with harmful content but also transforming how children are targeted, how survivors are revictimized, and how investigators are overwhelmed.
The Internet Watch Foundation (IWF), Europe’s largest hotline dedicated to combating online child sexual abuse imagery, reported a 260-fold increase in AI-generated child sexual abuse videos in 2025, from 13 videos in 2024 to 3,443. Researchers tracking the trend say the rise is both a warning and a call to action, and that reported figures capture only a fraction of what is actually circulating. “Any numbers that we see, it’s the tip of the iceberg,” said Melissa Stroebel, vice president of research and strategic insights at Thorn, a nonprofit that builds technology to combat online child sexual exploitation.
The proliferation of generative AI tools has made it easier, faster, and cheaper for bad actors to exploit children. Thorn has identified three primary ways these technologies are being weaponized. First, survivors of historical abuse are facing renewed victimization: offenders are using AI to manipulate previously circulated images, inserting themselves into scenes of past abuse and creating a new layer of trauma for people who have already suffered significant harm.
“In the same way that you can Photoshop Grandma who missed the Christmas picture into the Christmas picture,” Stroebel remarked, “bad actors can Photoshop themselves into scenes and records of an identified child.” This type of manipulation not only revictimizes survivors but also complicates the emotional landscape they have struggled to navigate for years.
The second method involves the weaponization of innocent images. A simple photograph of a child from a school soccer team can be transformed into CSAM using widely available AI tools within minutes. Thorn has also documented peer-on-peer incidents, where young individuals create abusive imagery of classmates, often without a full understanding of the harm they are inflicting.
The third and perhaps most systemic concern is the strain placed on reporting pipelines that were already overburdened. The National Center for Missing & Exploited Children receives tens of millions of CSAM reports annually. With AI generating novel material at speed, investigators face a daunting triage question: does an incoming image depict a child currently in danger, or is it an AI-generated creation? “Those are really critical inputs to help them triage and respond to these cases,” Stroebel explained, noting that both kinds of material are reported and processed through the same channels.
This technological shift has rendered some of the most established child safety guidance dangerously obsolete. For years, children have been advised not to share images online as a precaution against exploitation. However, Thorn’s research reveals a troubling trend: one in 17 young people has personally experienced deepfake imagery abuse, while one in eight knows someone who has been targeted. Victims of sextortion are now receiving fabricated images that closely resemble them, despite never having shared any images themselves. “There’s no need for a child to have shared an image any longer for them to be targeted for exploitation,” Stroebel asserted.
On the detection front, traditional hashing technology, which works like a digital fingerprint for known abuse files, fails to recognize AI-generated content because each synthetically created image is technically new. Altering even a minuscule detail in a well-known photograph, such as one of the Statue of Liberty, changes its digital fingerprint entirely, allowing harmful content to slip through undetected. As a result, classifier technology, which assesses what an image depicts rather than matching it to a known file, has become critical for identifying material that would otherwise evade detection.
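To make that limitation concrete, here is a minimal, hypothetical sketch of why exact-match fingerprinting cannot flag novel content. Production child-safety systems use more robust perceptual hashes such as PhotoDNA rather than the plain cryptographic hash shown here, but the core constraint is the same: a hash database can only match material it has already catalogued, and a genuinely new image, AI-generated or otherwise, produces a fingerprint that matches nothing.

```python
# Minimal sketch: why exact-match "digital fingerprints" miss novel content.
# Uses a plain SHA-256 hash for illustration; real systems use perceptual
# hashes (e.g., PhotoDNA) that tolerate small edits, but even those can only
# match images that already exist in a database of known material.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest serving as the file's digital fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

# Stand-in for the raw bytes of a previously catalogued image.
known_image = b"...bytes of a known, already-hashed photograph..."
known_hashes = {fingerprint(known_image)}  # the hotline's match database

# Alter a single byte: the equivalent of changing one minuscule detail.
altered = bytearray(known_image)
altered[-1] ^= 0x01

print(fingerprint(known_image) in known_hashes)     # True: exact match found
print(fingerprint(bytes(altered)) in known_hashes)  # False: fingerprint no longer matches
```

An AI-generated image never had a catalogued fingerprint in the first place, which is why classifiers that evaluate the content itself have become the necessary complement to hash matching.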
For parents, Stroebel’s message is urgent and clear: the conversation about online safety can no longer be postponed, and it must extend beyond traditional warnings. If a child comes forward with concerns, the immediate response should prioritize their safety and well-being. “Our job is, ‘Are you safe, and how do I help you move through to the next step?’” she said. This proactive approach is vital as society grapples with the complexities generative AI has introduced to child safety.