<https://www.theguardian.com/society/article/2024/jul/22/ai-child-sexual-abuse-videos-iwf-watchdog>

Paedophiles are using advances in artificial intelligence to produce 
AI-generated videos of child sexual abuse, and the volume of such footage could 
increase as the technology improves, according to a safety watchdog.

The majority of such cases seen by the Internet Watch Foundation involve 
manipulation of existing child sexual abuse material (CSAM) or adult 
pornography, with a child’s face transplanted on to the footage. A handful of 
examples involve entirely AI-made videos lasting about 20 seconds, the IWF said.

The organisation, which monitors CSAM around the world, said it was concerned 
that more AI-made CSAM videos could emerge as the tools behind them become more 
widespread and easier to use.

Dan Sexton, chief technology officer at the IWF, said that if AI video tools 
followed the same trajectory as AI-made still images – which have grown in 
volume as the technology has improved and become more widely available – more 
CSAM videos could emerge.

“I would tentatively say that if it follows the same trends, then we will see 
more videos,” he said, adding that future videos could also be of “higher 
quality and realism”.

IWF analysts said the majority of videos seen by the organisation on a dark web 
forum used by paedophiles were partial deepfakes, in which freely available AI 
models are used to impose a child’s face – including the faces of known CSAM 
victims – on existing CSAM videos or adult pornography. The IWF said it found 
nine such videos.

A smaller number of wholly AI-made videos were of more basic quality, according 
to the analysts, but they said this was the “worst” that fully synthetic video 
would ever be, as the technology will only improve.

The IWF added that AI-made CSAM images have become more photo-realistic this 
year compared with 2023, when it first started seeing such content.

Its snapshot study this year of a single dark web forum – which anonymises 
users and shields them from tracking – found 12,000 new AI-generated images 
posted over a month-long period. Nine out of 10 of those images were so 
realistic they could be prosecuted under the same UK laws covering real CSAM, 
the IWF said.

The organisation, which operates a hotline for the public to report abuse, said 
it had found examples of AI-made CSAM images being sold online by offenders in 
place of non-AI-made CSAM.

The IWF’s chief executive, Susie Hargreaves, said: “Without proper controls, 
generative AI tools provide a playground for online predators to realise their 
most perverse and sickening fantasies. Even now, the IWF is starting to see 
more of this type of material being shared and sold on commercial child sexual 
abuse websites on the internet.”

The IWF is pushing for changes to the law that would criminalise making guides 
for generating AI-made CSAM, as well as making “fine-tuned” AI models capable of 
producing such material.

The cross-bench peer and child safety campaigner Baroness Kidron tabled an 
amendment to the proposed data protection and digital information bill this 
year that would have criminalised creating and distributing such models. The 
bill fell by the wayside after Rishi Sunak called the general election in May.

Last week the Guardian reported that AI-made CSAM was overwhelming US law 
enforcement’s ability to identify and rescue real-life victims.
