
Understanding the New Wave of Chatbot Legislation: California SB 243 and Beyond

November 21, 2025

By: Justine Gluck (Future of Privacy Forum)


In this piece, author Justine Gluck (Future of Privacy Forum) dives into California's newly enacted SB 243, one of the first state laws governing AI-powered companion chatbots and the first to include protections specifically for minors. Passed amid growing concern over youth interactions with emotionally adaptive AI systems, the law requires clear disclosures, safety protocols, and additional safeguards when operators know a user is under 18. Its private right of action has also drawn attention, raising the stakes for potential liability as chatbots become more common in daily life.

Gluck explains that SB 243 reflects a broader trend: 2025 marked the first year multiple states enacted or seriously considered chatbot-specific legislation, including Utah, New York, California, and Maine. Much of this activity stemmed from high-profile incidents involving harmful chatbot interactions, testimony from affected families, and research showing that companion chatbots are widely used, especially by teens. While studies suggest these systems can offer emotional support, they have also been linked to self-harm risks, prompting lawmakers to focus on transparency, safety measures, and youth protection.

SB 243 also fits within emerging state-level patterns around disclosure and risk mitigation. Many 2025 bills required developers to clearly notify users that they are interacting with AI and to implement procedures to identify and prevent self-harm content. California's approach aligns with these themes but goes further for minors, requiring periodic reminders, content restrictions, and crisis-intervention mechanisms. At the same time, Gluck notes that these obligations create new questions around privacy, data retention, and how operators should detect signs of suicidal ideation.

Looking ahead, Gluck suggests that the momentum behind chatbot regulation is likely to continue into 2026, with companion chatbots at the center of debates about youth safety, mental health, and responsible AI design. As states refine their approaches, balancing innovation with protection, California's SB 243 may serve as an early model, especially as policymakers weigh whether to maintain transparency-based frameworks or move toward stricter use restrictions…
