In response to the escalating national concern over the role of technology in moderating illicit content, Democratic Assemblymember Marc Berman from Silicon Valley is spearheading a legislative effort to crack down on AI-generated depictions of child sexual abuse. Berman’s proposed bill, initially reported in California Playbook, seeks to update the state’s penal code, criminalizing the production, distribution, or possession of such material, even if it’s entirely fictitious.
Backing this initiative is Common Sense Media, a nonprofit founded by Jim Steyer that is known for its longstanding advocacy for children's online safety and privacy. The legislation, if enacted, could open a new avenue of complaints against social media companies, which are already under fire for perceived inadequacies in moderating harmful material on their platforms.
Berman’s bill is part of a broader legislative landscape, with at least a dozen proposals set to be considered by California lawmakers this year, all focused on setting limits on artificial intelligence. The move responds to growing concern over the unchecked proliferation of AI-generated content, particularly material depicting child sexual abuse.
This legislative push builds on a bipartisan law signed by Governor Gavin Newsom last year, which mandated increased efforts by social media platforms to combat child sexual abuse material. The law also granted victims the ability to sue companies deploying features leading to commercial sexual exploitation. Despite facing opposition from influential entities such as the California Chamber of Commerce and tech groups including Technet and NetChoice, the bill passed into law.
Berman’s new bill takes a distinct approach, targeting the creators and distributors of AI-generated images rather than the platforms hosting such content. Even so, the shift raises concerns within a tech industry already grappling with the broader implications of AI deployment.
Tech industry groups, including representatives from Google, Pinterest, TikTok, and Meta (the parent company of Instagram and Facebook), have expressed reservations about the legislation. They argue that such stringent regulations might inadvertently create a chilling effect in online spaces, raising questions about free expression.
Berman, however, emphasizes the necessity of addressing the troubling trend in the use of AI, particularly in the context of child sexual abuse material. In a single quarter last year, Meta reported a staggering 7.6 million instances of child sexual abuse material to the National Center for Missing and Exploited Children.
At the heart of the matter is the fact that AI-generated content depicting minors often relies on images and information scraped from real instances of sexual abuse material, potentially fueling real-life abuse of children. Berman points out that despite encountering such material, law enforcement agencies in California have been hindered in prosecuting individuals because of the digitally manufactured nature of the content.