12 March 2026

AI-Generated CSAM: An Urgent Threat to Children

By Safe Child Guide Editorial Team

The Internet Watch Foundation (IWF) has issued a stark warning: AI-generated child sexual abuse material is proliferating at an alarming rate. In its 2025 annual report, the IWF revealed that it had assessed over 20,000 AI-generated images depicting child sexual abuse — a dramatic increase on previous years.

The technology behind this threat has become disturbingly accessible. Open-source AI image generation models can be fine-tuned by anyone with basic technical knowledge to produce photorealistic depictions of child abuse. These images are being created, shared, and traded on dark web forums, in encrypted messaging groups, and increasingly on the open internet.

This is not a victimless crime. AI-generated CSAM normalises the sexual abuse of children, is used in grooming to lower children's inhibitions, and in some cases is created using real children's faces taken from social media or school photographs. Law enforcement agencies across the UK are clear: creating, possessing, or distributing AI-generated CSAM carries the same penalties as real CSAM.

The response is multi-layered. The IWF is developing hash-matching technology to detect and remove AI-generated material at scale. The Online Safety Act places new duties on platforms to proactively detect and remove this content, and technology companies are being urged to build safeguards into AI models to prevent misuse. Parents can help by being cautious about the images of their children that are publicly available online.
