
Colorado Eyeing Possible Do-Over of Landmark AI Law

March 24, 2026

Colorado lawmakers are weighing a sweeping reset of the state’s landmark artificial intelligence law following the release of a proposal from a policy group convened by Gov. Jared Polis (D). The proposal calls for replacing the Colorado Artificial Intelligence Act (CAIA) with a narrower, disclosure-driven framework that aligns more closely with the Trump administration’s emerging priorities.


The March 17 proposal from the Colorado AI Policy Workgroup would “flip the focus” of the state’s approach, shifting away from prescriptive oversight of “high-risk” systems toward consumer transparency and post-decision accountability, according to an analysis by the law firm Hogan Lovells.

The proposed change comes nearly two years after the 2024 passage of the CAIA, a law modeled in part on the EU AI Act’s risk-based regime. The statute imposed a duty of care on developers and implementers of AI systems used in consequential decisions such as hiring, lending, housing, and healthcare. It also required impact assessments, risk management programs, and consumer appeal rights.

From the outset, however, the CAIA drew significant opposition from industry groups, which warned it could stifle innovation and disproportionately burden smaller firms. Even key state officials signaled reservations. Gov. Polis, Colorado Attorney General Phil Weiser, and state lawmakers publicly called for revisions shortly after enactment, reflecting concerns that the law’s compliance obligations were overly rigid.

The new proposal reflects that feedback. It narrows the scope of regulated systems, replacing the CAIA’s “high-risk AI systems” category with “covered automated decision-making technologies” used to “materially influence” consequential decisions. Activities such as advertising, content moderation, cybersecurity, and fraud prevention would largely fall outside the law’s scope.

More fundamentally, the proposal abandons the CAIA’s risk-management architecture in favor of a transparency-based regime. Developers would no longer be subject to a general duty of care. Instead, they would be required to provide implementers with detailed documentation on system capabilities, limitations, training data categories, and appropriate use. Implementers, in turn, would see most of their compliance obligations eliminated, replaced by recordkeeping requirements and consumer notification duties following adverse decisions.

The proposal also recalibrates liability. It continues to omit a private right of action but clarifies that liability would be allocated based on relative fault rather than joint and several liability. At the same time, enforcement authority would remain concentrated with the state attorney general, with rulemaking limited largely to disclosure requirements.

Despite these deregulatory elements, the analysis cautions that companies would remain exposed to existing anti-discrimination, consumer protection, and privacy laws. The proposal does not displace those frameworks, leaving open questions about how they will be applied in practice.


The timing of Colorado’s pivot is closely tied to developments in Washington. The proposal coincides with a White House push for a national AI policy framework that emphasizes innovation, federal primacy, and the avoidance of a fragmented state-by-state regulatory landscape.

The Trump administration’s framework explicitly calls on Congress to “preempt state AI laws that impose undue burdens” and establish a “minimally burdensome national standard.” It also discourages the creation of new centralized AI regulators, instead favoring sector-specific oversight and industry-led standards.

In that context, Colorado’s move away from a European-style risk regime toward a lighter-touch, disclosure-oriented model appears designed in part to mitigate preemption risk. The Hogan Lovells analysis notes that federal agencies, including the Federal Trade Commission (FTC), are expected to clarify how existing consumer protection laws apply to AI, potentially overriding conflicting state requirements.

There are also financial considerations. Under a recent executive order, states with “onerous” AI regulations could face restrictions on certain federal broadband funding programs, creating additional incentives to recalibrate state laws.

Even so, the proposal faces an uncertain path. Colorado’s legislative session runs through mid-May, and lawmakers have offered mixed initial reactions. The Workgroup’s recommendations have not yet been formally introduced as legislation.

The debate underscores a broader inflection point in U.S. AI governance. While early state efforts like the CAIA sought to establish comprehensive guardrails modeled on European regulation, federal policymakers are increasingly signaling a preference for lighter, innovation-focused frameworks. Colorado’s proposed overhaul suggests that even the most ambitious state regimes may now be moving to align with that federal trajectory.