Thousands of AI-generated images depicting real victims of child sexual abuse threaten to “overwhelm” the internet, a watchdog has warned.
The Internet Watch Foundation (IWF), the UK organisation responsible for detecting and removing child sexual abuse imagery from the internet, said its “worst nightmares” have come true.
The IWF said criminals were now using the faces and bodies of real children who have appeared in confirmed abuse imagery to create new images of sexual abuse through artificial intelligence technology.
The organisation's published data showed that the most convincing imagery would be difficult even for trained analysts to distinguish from actual photographs, and that some content was now realistic enough to be treated as real imagery under UK law.
The IWF warned that the technology was only improving, and would create further obstacles for watchdogs and law enforcement agencies tackling the problem.
The research comes ahead of the UK hosting the AI safety summit next week, where world leaders and tech giants will discuss the developing issues around artificial intelligence.
In its latest research, the IWF said it had also found evidence of the commercialisation of AI-generated imagery, and warned that the technology was being used to “nudify” images of children whose clothed images had been uploaded online for legitimate reasons.
In addition, it said AI image tech was being used to create images of celebrities who had been “de-aged” and depicted as children in sexual abuse scenarios.
In a single month, the IWF said it investigated 11,108 AI images which had been shared on a dark web child abuse forum.
Of these, 2,978 were confirmed as images which breached UK law, and 2,562 were so realistic the IWF said they would need to be treated the same as if they were real abuse images.
Susie Hargreaves, chief executive of the IWF, said: “Our worst nightmares have come true. Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point.
“Chillingly, we are seeing criminals deliberately training their AI on real victims’ images who have already suffered abuse.
“Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.
“As if it is not enough for victims to know their abuse may be being shared in some dark corner of the internet, now they risk being confronted with new images, of themselves being abused in new and horrendous ways not previously imagined.
“This is not a hypothetical situation. We’re seeing this happening now. We’re seeing the numbers rise, and we have seen the sophistication and realism of this imagery reach new levels.
“International collaboration is vital. It is an urgent problem which needs action now. If we don’t get a grip on this threat, this material threatens to overwhelm the internet.”
The IWF said it feared that a deluge of AI-generated content could divert resources from detecting and removing real abuse, and in some instances could lead to missed opportunities to identify and safeguard real children.