
Connecticut AG Puts Businesses on Notice: Old Laws Still Apply to AI

April 16, 2026

Connecticut Attorney General William Tong has issued a sweeping advisory clarifying that businesses deploying artificial intelligence systems remain fully subject to the state’s existing legal framework—even in the absence of a comprehensive, AI-specific statute. The guidance, as analyzed by Squire Patton Boggs, underscores a central message for compliance officers and in-house counsel: AI does not operate in a regulatory vacuum.


The advisory, directed to state agencies and private-sector stakeholders alike, outlines how Connecticut’s civil rights, privacy, data security, consumer protection and antitrust laws apply to AI system development and use. It also signals enforcement priorities and encourages residents to report AI-related harms to the Office of the Attorney General.

At its core, the advisory reframes AI governance as an extension of existing obligations rather than a novel regulatory domain. Businesses using AI in high-stakes contexts, such as hiring, lending, housing, insurance and healthcare, are reminded that both federal and state anti-discrimination laws remain fully enforceable. The attorney general emphasizes that algorithmic decision-making does not insulate companies from liability where outcomes result in unlawful bias or disparate impact.

Privacy and data security requirements are another focal point. The Connecticut Data Privacy Act (CTDPA) applies squarely to personal data used in AI systems, including training datasets and model outputs. Companies must comply with core obligations such as data minimization, consumer notice, consent mechanisms and data protection assessments. Notably, amendments to the CTDPA taking effect July 1, 2026 will require businesses to disclose whether they use personal data to train large language models—an explicit recognition of AI-specific risks within an existing statutory framework.

The advisory also highlights operational complexities unique to AI, particularly around data deletion and downstream use. Businesses that acquire datasets from third-party brokers must ensure that proper notice was provided to consumers at the point of collection. Moreover, any retroactive material change in data use—such as repurposing data for AI training—triggers updated notice requirements and the opportunity for consumers to withdraw consent.

Beyond privacy, Connecticut’s general data security and breach notification statutes apply to AI deployments. Companies must safeguard personal information used in AI systems and promptly notify affected individuals in the event of unauthorized access or disclosure.


Consumer protection enforcement is another key pillar. The Connecticut Unfair Trade Practices Act (CUTPA) provides broad authority to police deceptive or unfair uses of AI, including misleading advertising generated or facilitated by automated systems. The statute carries significant enforcement tools, including civil penalties, injunctive relief and a private right of action for affected consumers.

Similarly, the Connecticut Antitrust Act applies to AI-driven market behavior. The advisory warns against the use of algorithms to coordinate pricing, allocate markets or otherwise engage in anti-competitive conduct—whether in AI products themselves or in downstream markets.

Importantly, per Squire, the attorney general situates this guidance within a broader enforcement trajectory. The advisory references prior actions targeting the misuse of algorithms to create addictive design features for minors and to reinforce monopolistic dynamics in sectors such as search, mobile ecosystems and ticketing.

While Connecticut lawmakers are considering several AI-related bills, including measures addressing data brokers, algorithmic pricing and automated employment decisions, those proposals remain in early stages and are unlikely to be enacted in the current legislative session.

For now, the message from Connecticut regulators is unambiguous. Companies cannot wait for bespoke AI legislation before addressing risk. Existing laws already impose substantive compliance obligations on AI systems, and enforcement authorities are prepared to act accordingly.