ASSOCIATED PRESS

Abuse in the machine: Study shows AI image-generators being trained on explicit photos of children

By MATT O'BRIEN and HALELUYA HADERO
Published - Dec 20, 2023, 07:04 AM ET
Last Updated - Dec 21, 2023, 11:48 AM ET

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built. 

Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world. 

Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they've learned from two separate buckets of online images — adult pornography and benign photos of kids.  

But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. 
