Surge in AI-Generated Child Sexual Abuse Material and Exploitation Scams

Severity: High (Score: 68.0)

Sources: Care.Uk, Mynorthwest

Summary

In 2025, the Internet Watch Foundation (IWF) reported over 8,000 cases of AI-generated child sexual abuse material (CSAM), including a 260-fold increase in AI-generated videos. The majority of this material was classified as Category A, the most severe under UK law. Concurrently, the FBI warned of a significant rise in cybercriminals using AI to exploit children, with nearly 63 million files of CSAM reported in 2024. Criminals are leveraging generative AI to create deepfake content for ransom, grooming, or distribution. The IWF is advocating for an AI Bill to enforce safety measures in AI systems, while the FBI urges parents to implement safeguards against these threats. The situation poses a severe risk to children and highlights the urgent need for legislative action and public awareness.

Key Points

• AI-generated child sexual abuse material reached a record high of over 8,000 cases in 2025.
• The FBI reports a massive increase in AI-driven sexual exploitation scams targeting children.
• The IWF calls for an AI Bill to enforce safety-by-design measures for AI platforms.
