AI Agents Are Raising New Questions of Fraud and Privacy Liability

February 12, 2026

Courts are beginning to answer a critical question for the digital economy: when autonomous AI agents act on a user’s behalf, who bears legal responsibility under statutes written decades before such systems existed?

    A recent analysis of litigation trends by Babalakin & Co. highlights how judges are applying two established technology statutes — the federal Computer Fraud and Abuse Act (CFAA) and the California Invasion of Privacy Act (CIPA) — to disputes involving so-called agentic AI systems. For businesses building or deploying these tools, the early case law offers concrete compliance signals.

    The CFAA, enacted in 1986 as an anti-hacking statute, imposes liability for accessing a protected computer “without authorization” or in a manner that “exceeds authorized access.” Its scope has been narrowed by recent precedent.

    In Van Buren v. United States, the U.S. Supreme Court held that “exceeds authorized access” applies only when a user accesses off-limits areas of a computer, not when they misuse data they are otherwise entitled to obtain. And in hiQ Labs Inc. v. LinkedIn Corp., the Ninth Circuit concluded that scraping publicly available data from a site without authentication barriers does not violate the “without authorization” prong.

    By contrast, in Facebook Inc. v. Power Ventures Inc., the Ninth Circuit found liability where the defendant circumvented IP blocking measures after receiving a cease-and-desist letter.

    These precedents frame the dispute in Amazon.com Services LLC v. Perplexity AI Inc., currently pending in the Northern District of California. Amazon alleges that Perplexity’s AI agent, Comet, accessed nonpublic pages of Amazon’s platform using customer credentials while bypassing technical barriers and bot-detection measures. Perplexity argues the agent acted at the direction of authorized account holders.

    The case crystallizes a central issue for agentic AI: does user authorization suffice, or can platforms define the scope of permissible delegation? If courts accept the theory that only the platform may authorize automated access, companies that design agents to operate inside credential-gated environments may face CFAA exposure even when end users provide login credentials.

Per B&C, businesses developing AI agents that interact with third-party platforms should treat terms of service and technical controls as enforceable boundaries.

    CIPA, a 1967 California wiretapping statute, has become a focal point in privacy litigation involving digital technologies. Section 631(a) imposes liability on third parties that intercept or learn the contents of communications without the consent of all parties.

    Recent cases suggest that AI vendors can face exposure if their systems are deemed to function as third-party listeners rather than neutral tools.

    In Ambriz v. Google LLC, plaintiffs challenged Google’s AI-powered Cloud Contact Center technology, alleging that it enabled Google to analyze and use call data without proper consent. The court applied the “capability test,” asking whether Google had the technological ability to use communication data for its own purposes. The court found that contractual language reserving data-use rights plausibly established such capability.

    Similarly, in Taylor v. ConverseNow Technologies Inc., the court held it was plausible that an AI voice assistant provider had the capability to use customer data for independent purposes, based in part on representations in its privacy policy.

Bottom line, per B&C: AI vendors that rely on customer communications to train models, improve products or refine advertising face heightened CIPA risk in California and similar all-party consent states. Structuring services as pure processors — with clear technical and contractual limits on independent data use — may reduce exposure, though this can conflict with product improvement strategies.

    For legal practitioners advising AI developers, B&C highlights several compliance recommendations:

    • Align product design with access controls. Avoid engineering agents to circumvent authentication or anti-bot measures. Technical architecture decisions may later be characterized as evidence of unauthorized access.
    • Harden contractual language. Terms of service should precisely define data collection, delegation authority and automated access policies.
    • Reassess data rights clauses. Broad reservations of rights to use communication data may trigger CIPA capability analysis.
    • Strengthen consent mechanisms. Clear disclosures and affirmative consent can mitigate risk under both CFAA and CIPA by reinforcing authorization and consent defenses.

    Based on the initial case law, courts are treating agentic AI not as legally novel, but as a new factual context for established doctrines. For companies racing to deploy autonomous agents, the message is direct. Product functionality, data flows and contractual positioning are no longer purely engineering choices. They are litigation risk variables embedded in legacy statutes that remain fully operative in the age of AI.