""This is really an outlier," Fallon McNulty, executive director of NCMEC's CyberTipline, told Bloomberg. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM. "Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place.""
""We take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known [child sexual abuse material] and protect our customers," an Amazon representative said in a statement to Bloomberg. The spokesperson also said that Amazon aimed to over-report its figures to NCMEC in order to avoid missing any cases. The company said that it removed the suspected CSAM content before feeding training data into its AI models."
"The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "vast majority" of that content was reported by Amazon, which found the material in its training data, according to an investigation by . In addition, Amazon said only that it obtained the inappropriate content from external sources used to train its AI services and claimed it could not provide any further details about where the CSAM came from."
According to NCMEC's CyberTipline, Amazon's filings lacked disclosable source information and were largely inactionable, while reports from other companies contained actionable leads. The findings come amid intensifying concern across the AI industry about the safety of minors.
Read at Engadget