The National Center for Missing and Exploited Children (NCMEC) received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025, the majority of which were traced back to Amazon. According to an investigation by Bloomberg, the e-commerce giant detected the material in its AI training data. Amazon acknowledged that the CSAM was sourced from external data used to train its AI services but declined to provide further details about its origins.
In a statement to Engadget, Amazon said the reporting channel it established in 2024 was designed with known limitations. “When we set up this reporting channel in 2024, we informed NCMEC that we would not have sufficient information to create actionable reports, because of the third-party nature of the scanned data,” the company stated, adding that the arrangement was meant to avoid diluting the efficacy of its other reporting channels. As a result, Amazon’s reports lacked the actionable data that would allow NCMEC to pass information on to law enforcement.
Fallon McNulty, executive director of NCMEC’s CyberTipline, described Amazon’s volume of reports as an anomaly. “Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place,” McNulty told Bloomberg. She noted that other companies provide actionable data that can facilitate law enforcement intervention, a stark contrast to Amazon’s less useful reports.
Amazon reiterated its commitment to preventing CSAM across its business operations and said its AI models had not generated any child exploitation material. Citing its Generative AI Principles to Prevent Child Abuse, the company stated, “We take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known CSAM and protect our customers.” However, Amazon also acknowledged that its proactive scanning does not yield reports with the same level of detail as those generated by consumer-facing tools.
To account for the high volume of reports, Amazon explained that it applies an intentionally over-inclusive threshold when scanning, which produces a significant number of false positives. The surge in AI-related CSAM reports underscores safety concerns around minors, an issue that has gained increasing visibility across the artificial intelligence sector. NCMEC’s records show that AI-related reports climbed from 4,700 in 2023 to 67,000 in 2024, then to more than 1 million in 2025.
The surge comes as AI chatbots have been implicated in troubling incidents involving minors. Companies like OpenAI and Character.AI are facing lawsuits after reports of teenagers using their platforms to plan suicides, and Meta is being sued over alleged failures to protect young users from inappropriate interactions with its chatbots. As these cases unfold, the responsibility of AI developers to safeguard against abusive content is under intense scrutiny.
Update, January 30, 2026, 11:05AM ET: This story has been updated to include several statements from Amazon.
See also
Amazon Cuts 30,000 Jobs to Fuel $50B AI Investment in OpenAI, Restructures for Profitability
India to Host Landmark AI Impact Summit 2026, Featuring 100+ Countries and $70B in Investments
95% of AI Projects Fail in Companies According to MIT
AI in Food & Beverages Market to Surge from $11.08B to $263.80B by 2032