
Healthcare AI Is Booming. The Regulations Governing It Are All Over the Map 

February 24, 2026

AI is reshaping how American patients receive care. Algorithms are helping doctors diagnose illness, flag risky medications and deny insurance claims. But as hospitals race to adopt these tools, a patchwork of conflicting state laws is making it difficult for anyone to know what the rules actually are.


A new analysis from law firm Husch Blackwell, published this week, lays out just how complicated the situation has become. The firm’s healthcare attorneys warn that providers now face a “fragmented and highly varied regulatory landscape from state to state” — one that affects everything from mental health chatbots to AI systems that help doctors decide which patients receive medication.

The core problem is simple: there is no single national standard for how AI can be used in healthcare. Different states are moving in different directions at very different speeds.

Some states have gone big. Colorado, Texas and Utah have each passed sweeping laws that cover AI broadly — requiring hospitals and health systems to manage risk, check for bias and be accountable for what their AI systems do. Other states have taken a narrower approach. Arizona, for example, now requires that a human review any insurance denial made with AI’s help. Illinois has specific rules for AI used in mental health therapy.


The Husch Blackwell report identifies three issues that most states seem to agree on, even if they’re handling them differently. First, discrimination. AI systems can absorb and amplify human biases, leading to unequal care. Colorado’s law is among the strictest in the country on this point, requiring hospitals using “high risk” AI for clinical decisions to conduct detailed impact assessments and take active steps to prevent unfair outcomes.


Second, keeping doctors in charge. Nearly every state law reviewed by the firm draws a line between AI that supports a physician and AI that replaces one. Texas requires that doctors personally review all AI-generated patient records. Illinois allows AI to recommend therapy treatment plans — but only if a licensed provider signs off first.

Third, transparency. Patients, the laws agree, deserve to know when AI is involved in their care. Utah requires mental health chatbots to clearly identify themselves as AI. California requires disclosure whenever AI generates clinical communications for a patient. Colorado goes furthest: if AI contributes to a decision that negatively affects a patient’s care, the hospital must explain what data the AI used, what role it played — and provide a process for the patient to appeal.

The White House has stepped into this debate. In December 2025, President Trump signed an executive order calling for a unified national AI policy. The order directs the Commerce Department to review state AI laws within 90 days and flag any that are considered burdensome to innovation. States whose laws conflict with federal standards could lose access to certain federal funding.

But the Husch Blackwell attorneys are careful to note that the executive order has limits. “Only Congress has the authority to enact true preemption through legislation,” they write. “The EO cannot invalidate state law.” In other words, hospitals still need to comply with the laws of every state where they operate — and those rules are still changing.

For healthcare providers, the firm recommends starting by taking stock: catalog every AI system in use, figure out which laws apply and build clear internal policies before regulators come knocking. The pace of change, the authors warn, is not slowing down.