
Dozens of State AGs Demand AI Companies Fix ‘Delusion’ Outputs by Chatbots

December 11, 2025

A group of 42 state attorneys general sent a letter Tuesday to Microsoft, OpenAI, Google, Meta and nine other major AI companies demanding they take concrete steps to mitigate the harm caused by sycophantic and delusional outputs from their generative AI models. The letter follows a string of disturbing mental health incidents involving AI chatbots, including the death of a New Jersey resident, the murder-suicide of a 35-year-old man and his 83-year-old mother in Florida, and the suicides of teenagers in Florida and California, as well as hospitalizations for psychosis and incidents of domestic violence.


“In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional,” the letter said. “Importantly, we are also disturbed by the types of conversations that GenAI products are having with child-registered accounts, including grooming, supporting suicide, sexual exploitation, emotional manipulation, suggested drug use, proposed secrecy from parents, and encouraging violence against others.”

Among the steps demanded are performing safety tests to ensure models do not produce “potentially harmful sycophantic and delusional outputs” prior to commercial release; recalling chatbots and other gen-AI products “if you cannot stem dangerous sycophantic or delusional outputs”; subjecting models to independent third-party safety audits reviewable by state and federal regulators; notifying users “promptly, clearly, and directly” if they have been exposed to harmful sycophantic or delusional outputs; and preventing chatbots from generating “unlawful or illegal outputs” for child-registered accounts that encourage “grooming, drug use, violence, self-harm, and parental secrecy.”

The letter warns that “many of our states have robust criminal codes that may prohibit” some conversations chatbots are having with users. “For example, in many states encouraging an individual to commit a criminal act like shooting up a factory or using drugs is itself a criminal offense, as is coercing someone into dying by suicide. So is corrupting the morals of a minor by encouraging them to commit a sexual offense.”

The letter also notes that it is illegal to provide mental health advice without a license.


“While training and testing their GenAI models, developers must ensure that the GenAI is complying with criminal and civil laws, protecting children, and not providing sycophantic and delusional outputs,” the letter adds. “Failing to do so could open your company up to liability for employing dark patterns, such as anthropomorphization, harmful content generation, and sycophancy. The same is also true for failing to take appropriate remedial actions after the GenAI model is made publicly available.”

The letter’s demand that companies take specific steps, and its implication that failure to comply with those demands could lead to criminal charges, is sure to inflame the debate over federal preemption of state laws regulating AI. Its release on Wednesday comes in the same week that President Trump promised to sign an executive order purporting to ban states from enacting such laws and threatening to withhold federal funds from states that maintain what the White House views as overly burdensome regulations affecting AI.

State AGs have been among the most active state officials in resisting calls for federal preemption. Many of the same signatories to this week’s letter also signed on to letters to House and Senate leaders voicing objections to Congressional efforts to impose a ban on state AI regulations.

“America’s leadership in AI does not extend to using our residents, especially children, as guinea pigs while AI companies experiment with new applications,” this week’s letter said, adding that the industry’s “move fast and break things” mantra “cannot apply when what you may break are the lives of our states’ residents, including vulnerable children.”