This prohibition would apply to toys designed for children, and it would last until Jan. 1, 2031, according to the legislative counsel’s digest of the bill (Senate Bill 867).
Padilla said in a Friday (Jan. 2) press release that the bill’s four-year moratorium on these products would allow time for the development of safety regulations to protect children from dangerous AI interactions.
The release added that Padilla authored another AI safety-focused law that requires chatbot operators to implement safeguards and gives families a private right of action to sue developers that are noncompliant and negligent.
That law, Senate Bill 243, was approved by California’s governor on Oct. 13, according to the state legislature’s website.
Padilla’s introduction of his newest bill follows reports of two cases in which teenagers ended their lives after forming relationships with chatbots; the publication of a U.S. PIRG Education Fund study that found AI chatbot toys could engage in conversations that were not age appropriate; and toy maker Mattel’s June announcement that it had partnered with OpenAI to support AI-powered products, according to the press release.
“Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology does,” Padilla said in the release. “Pausing the sale of these chatbot-integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow.”
PYMNTS reported in July that Padilla’s earlier bill, Senate Bill 243, marked one of the first major attempts in the United States to regulate AI companions, particularly with respect to their impact on minors.
The Federal Trade Commission (FTC) said in September that it is investigating the effect of AI chatbots on children and teens, and that it had issued orders to seven providers of AI chatbots seeking information on how those companies measure and monitor the technology's potentially harmful impacts on young people.