Information

"Informed AI News" is an publications aggregation platform, ensuring you only gain the most valuable information, to eliminate information asymmetry and break through the limits of information cocoons. Find out more >>

AI-Generated Child Abuse Material Surges, Challenging Tech Regulation

Generative AI is fueling a surge in online child sexual abuse material (CSAM). The UK's Internet Watch Foundation (IWF) reports a 17% increase in AI-altered CSAM since fall 2023. Deepfake content using real victims' imagery is rampant: one forum shared 3,512 explicit images and videos in 30 days, mostly of young girls. Offenders also share advice and AI models trained on images of real victims.

IWF CEO Susie Hargreaves warns, "Without proper controls, generative AI tools provide a playground for online predators." The technology is advancing rapidly, producing ever more lifelike synthetic videos. Of 12,000 AI-generated images found on one dark web forum, 90% were realistic enough to be assessed as real CSAM under existing law.

Apple faces allegations of underreporting CSAM shared via its products. The National Society for the Prevention of Cruelty to Children (NSPCC) found Apple implicated in 337 offenses in England and Wales alone between April 2022 and March 2023, yet the company reported only 267 cases worldwide to the National Center for Missing and Exploited Children (NCMEC).

By contrast, Google reported over 1.47 million CSAM cases to NCMEC in 2023, and Facebook removed 14.4 million pieces of child sexual exploitation content between January and March this year. Despite these efforts, the misuse of generative AI is intensifying the battle against online child exploitation.

Full article>>