Detecting AI-generated child sexual abuse material: evaluating the potential of deep learning algorithms for differentiating between synthetic and real media

Researchers

Dr Rachel Berry, Medicine & Health, UNSW
Dr Jo Plested, UNSW Canberra
Professor Michael Salter, Arts, Design & Architecture, UNSW
Associate Professor Xanthé Mallett, QLD Centre for Domestic and Family Violence
Associate Professor Amber McKinley, Charles Sturt University
Mr Adam Powderly, NSW Police

Funding

This project was part of the Australian Human Rights Institute’s 2025 seed funding round, receiving $5,000.

Summary

In the age of artificial intelligence (AI), it is increasingly difficult to separate the real from the fake. Offenders are exploiting emerging technologies, including generative AI and deepfake techniques, to produce highly realistic child sexual abuse material (CSAM). Child sexual abuse (CSA) is a major issue in Australia, with nearly 1.5 million Australians having experienced it before the age of 15. The Internet Watch Foundation reported a sharp increase in AI-generated CSAM in 2023-2024, on both dark web forums and commercial websites. The Office of the eSafety Commissioner has identified generative AI as a growing threat, though the true volume of AI-generated CSAM in circulation online is unknown.

AI-generated material perpetuates harm and normalises the production and viewing of synthetic exploitative content. Alarmingly, AI-generated CSAM increasingly depicts severe abuse, with offenders using AI to generate new imagery of known victims. Its proliferation raises major human rights and child rights concerns, as it violates children's rights to privacy and to protection from sexual exploitation and abuse.

A major concern is the diminished ability to identify and support real victims of CSA when synthetic material is indistinguishable from genuine imagery. This project will advance the capacity to detect AI-generated CSAM through a comprehensive evaluation of deep learning algorithms. Existing AI technologies aimed at recognising, preventing and disrupting CSAM are either ineffective or lack real-world validation. This project addresses that knowledge deficit by assessing the effectiveness of deep learning algorithms in detecting AI-generated CSAM and producing recommendations for best-practice detection. By refining detection methods, this research aims to improve resource allocation, ensuring that law enforcement and child protection agencies can effectively identify and safeguard children from sexual abuse and exploitation.