
Bipartisan Senate Bill Targets Minors’ Access to AI ‘Companions’

October 31, 2025

In a rare burst of bipartisanship, a group of senators from both parties introduced a bill on Tuesday to limit how AI chatbots interact with children. The legislation, called the GUARD Act, would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.


“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” said Sen. Richard Blumenthal (D-CT), a co-sponsor of the measure. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”

In addition to Blumenthal, sponsors of the bill include Sens. Josh Hawley (R-MO), Katie Britt (R-AL), Mark Warner (D-VA) and Chris Murphy (D-CT).

The bill’s introduction follows high-profile incidents in which teens harmed themselves after interacting with an AI chatbot. The parents of 16-year-old Adam Raine recently filed a lawsuit against OpenAI, alleging that their son took his own life after discussing suicide with his AI “companion.”

“In recent months, we have seen multiple alarming incidents involving chatbots—from encouraging suicide to engaging in sensual conversations with minors—that should give every American pause,” Sen. Britt said. “It is our job in Congress to do everything we can to protect our children as we navigate the new reality of living in a world where AI is increasingly integrated into our lives.”


According to a recent study by Common Sense Media, 72% of teens have used an AI companion at least once. Roughly one in three teens have “used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice,” the study said. About one in three AI companion users reported feeling uncomfortable with something an AI companion had said or done, and a similar share reported discussing important or serious matters with AI companions instead of with real people.

“It’s alarming to see AI chatbots contributing to incidents of self-harm among young people,” Sen. Warner said. “Congress and the tech industry can’t afford to wait until more kids are hurt or more lives are lost. This bipartisan legislation will make sure clear guardrails are in place to protect kids from manipulative or dangerous chatbot interactions.”

In the wake of the Raine case and other widely reported incidents, OpenAI announced updates to its default ChatGPT model based on input from mental health professionals. On Monday, it disclosed that around 1.2 million of its 800 million weekly users, adults as well as minors, discuss suicide with ChatGPT each week. That figure represents the 0.15% of weekly active users whose conversations include explicit indicators of potential suicidal planning or intent, OpenAI said.

The GUARD Act would require AI companies to verify the ages of all new account holders and to bar minors from accessing AI companions. It would also make it a crime to “design, develop, or make available an artificial intelligence chatbot” knowing that it poses a risk of “soliciting, encouraging, or inducing minors” to engage in sexually explicit conduct, or that it “encourages, promotes, or coerces” suicide or self-harm.