
Federal Judge Rules AI Chatbot Conversations Can Be Seized as Evidence in Fraud Cases 

February 16, 2026

If you’ve ever typed a sensitive question into ChatGPT or Claude, hoping to get a handle on a legal problem before calling your lawyer, a new court ruling should give you pause. A federal judge in Manhattan has decided that conversations with AI chatbots do not get the same legal protections as conversations with your attorney. That could have big consequences for executives, companies and anyone who uses AI to think through legal trouble.


The ruling came down on Feb. 10 from U.S. District Judge Jed Rakoff, one of the most prominent judges in the Southern District of New York. The case involved Bradley Heppner, a former financial services executive facing federal securities fraud charges. Before his arrest, Heppner had used Anthropic’s Claude to research the government’s investigation into his conduct and assess his potential legal exposure.

According to a detailed analysis from law firm McGuireWoods, Heppner typed prompts into Claude that included facts he had learned from his own lawyer. The AI model generated written responses. When federal agents arrested Heppner in November and searched his Dallas home, they seized his electronic devices and found roughly 31 documents made up of those AI prompts and outputs.

Heppner’s defense team argued those materials should be off-limits. They said he created them to prepare for meetings with his attorney and later shared them with counsel. The government disagreed and asked the court to rule that the documents were fair game. Rakoff sided with the government.


The judge’s reasoning came down to three main points. First, an AI chatbot is not a lawyer. The attorney-client privilege covers private communications between a person and their attorney for the purpose of obtaining legal advice. Typing questions into Claude does not meet that standard. Claude itself warns users to consult a “qualified attorney.”


Second, the conversations were not truly private. Consumer AI platforms like Claude may retain user data, use it for training, and even share it with regulators or third parties. That undercuts any claim of confidentiality, which is a core requirement for privilege.

Third, the legal concept known as “work product” protection didn’t apply either. That doctrine covers materials prepared by a lawyer, or at a lawyer’s direction, in anticipation of a lawsuit. Heppner acted on his own. His defense team admitted as much.

As McGuireWoods put it in its analysis: “If the defendant had instead conducted Google searches or checked out certain books from the library to assist with his legal case, the underlying searches or library records would not be protected from disclosure simply because the defendant later discussed what he learned with his attorney.”

That comparison is striking. It essentially puts AI-generated legal research on the same footing as a Google search or a trip to the library — useful, but not shielded from investigators.

There is one important caveat. The ruling leaves open the question of what happens when AI is used under a lawyer’s direct supervision, on a secure enterprise platform with strict privacy protections. That is a very different scenario from someone independently querying a consumer chatbot. McGuireWoods noted that the government itself acknowledged the analysis “might be different” if counsel had directed the AI use.

For companies, the takeaway is clear. As more executives and compliance teams turn to AI tools to analyze regulatory risk and organize facts ahead of legal consultations, they need to understand that those interactions could become evidence in a future proceeding. AI platforms are not considered trusted advisers under the law. They are tools — powerful ones, but ones that carry real disclosure risks. How companies structure and govern their AI use may soon matter just as much as what they use it for.