
FTC Prepares to Investigate How OpenAI, Meta, Character.AI Affect Minors

September 4, 2025

The U.S. Federal Trade Commission is set to examine how artificial intelligence chatbots may affect children’s mental health, with major technology companies expected to come under scrutiny. According to Reuters, the FTC is preparing to request internal documents from leading firms in the space, including OpenAI, Meta Platforms and Character.AI.


The Wall Street Journal reported on Thursday that the agency is drafting letters to companies that run widely used chatbots, citing administration officials. In response, Character.AI said, “Character.AI has not received a letter about the FTC study, but we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space.”

Per Reuters, the FTC, OpenAI and Meta did not immediately comment on the matter, and the news agency said it could not independently verify the Wall Street Journal’s report. A White House spokesperson told Reuters that the FTC and the administration are working to balance President Trump’s directive to maintain U.S. leadership in artificial intelligence, cryptocurrency and other advanced technologies with the need to ensure public safety and welfare.


The focus on chatbot safety comes in the wake of an exclusive Reuters report detailing how Meta’s AI bots allowed interactions with children that veered into “romantic or sensual” territory. Following that revelation, Meta announced new safeguards, including limiting teenagers’ access to certain AI characters and training its systems to avoid conversations involving flirting, self-harm or suicide.

Concerns over the risks of AI-driven mental health tools have also reached regulators and advocacy groups. In June, more than 20 consumer organizations filed complaints with the FTC and state attorneys general, arguing that platforms like Meta AI Studio and Character.AI were effectively enabling “therapy bots” without proper licensing.

Separately, Texas Attorney General Ken Paxton last month opened an investigation into Meta and Character.AI, accusing the companies of misleading minors with AI-based mental health services.

Source: Reuters