
The Hidden Security Risk Inside Your Company’s AI Tools 

March 13, 2026

Every time an employee types a question into an AI chatbot at work, something happens that most people don’t think about. The question gets saved; the AI remembers it. And in many cases, the company has no clear plan for when or whether to delete it.


Multiply that by thousands of workers across thousands of companies, and a significant new security problem starts to take shape.

This is the core warning from a new analysis by Brooks Kushman, a law firm that advises companies on technology and intellectual property. The firm says two specific problems now represent the most urgent security threats in corporate AI: the indefinite storage of AI data, and weak controls over who can access AI systems in the first place.

The data problem is more widespread than most executives realize. When workers use AI tools, they often upload sensitive materials without realizing those files are being retained. That could include client records, financial data, legal strategies, or trade secrets. By default, some AI platforms even use those interactions to train their models, unless the company actively opts out.

The result, according to Brooks Kushman, is a growing attack surface. The more data a company holds, the more there is to steal, and regulators increasingly want to know what organizations are doing to limit that exposure.
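One concrete way to limit that exposure is a scheduled retention job that deletes stored AI interactions once they age past a defined window, rather than keeping them indefinitely. The sketch below is a minimal, hypothetical example: the `prompt_log` table, its `created_at` column, and the 90-day window are assumptions for illustration, not details from the firm's analysis.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed retention window, not a recommended value

def purge_expired_prompts(db_path: str) -> int:
    """Delete AI interaction records older than the retention window.

    Assumes a hypothetical table ``prompt_log`` whose ``created_at``
    column stores ISO-8601 UTC timestamps (which sort lexicographically).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "DELETE FROM prompt_log WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        conn.commit()
        return cur.rowcount  # number of records purged
    finally:
        conn.close()
```

Run on a schedule, a job like this caps how much stored conversation data an attacker or a subpoena could ever reach.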

The second problem is access: specifically, who (or what) is allowed to use an AI system and what they can do with it. Traditional corporate software limits users to specific tools and data sets. With AI, a single user with too many permissions can pull information from across an organization, generate new content from it, and share that output widely.

The situation becomes more complicated when the “user” is not a human but an AI agent able to work independently, make decisions, and interact with other systems. According to Brooks Kushman, those agents need to be treated like any other privileged employee.


“AI security is no longer just about protecting models. It is about controlling data, defining access, preserving evidence, and ensuring accountability across complex, evolving systems,” the firm wrote.

To address the access problem, Brooks Kushman recommends a system called Role-Based Access Control, or RBAC — a formal framework that defines exactly what each person, and each AI agent, is permitted to do within a company’s systems. Under RBAC, a developer would have different permissions than a manager, and an AI agent running automated tasks would be restricted to only the systems it absolutely needs.
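As a rough illustration (not the firm's implementation), an RBAC check can be as simple as a lookup from role to permitted actions. The role names and permissions below are hypothetical; the point is that an AI agent gets its own narrowly scoped role rather than inheriting a human user's broad access.

```python
# Minimal RBAC sketch. Roles and permissions are illustrative
# assumptions, not a prescription from the firm's analysis.
ROLE_PERMISSIONS = {
    "developer": {"read_code", "write_code", "query_ai"},
    "manager":   {"read_reports", "query_ai"},
    # An AI agent is scoped like any other privileged account:
    # only the systems it absolutely needs.
    "ai_agent":  {"read_tickets", "update_tickets"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Usage: an automated agent asking for data outside its role is denied.
assert is_allowed("developer", "write_code")
assert not is_allowed("ai_agent", "read_reports")
```

Everything defaults to denied; access exists only where a role explicitly grants it, which is the property auditors and regulators increasingly expect companies to demonstrate.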

There are legal risks as well. A recent federal court ruling in United States v. Heppner held that conversations with a publicly available AI tool are not protected by attorney-client privilege. If a lawyer or executive runs sensitive legal analysis through a consumer AI product, that conversation could show up in court. The ruling puts pressure on companies to use enterprise-grade AI platforms with formal security commitments, not free consumer tools.

Looking ahead, the pressure is only going to increase. The EU AI Act, a wave of U.S. state privacy laws, and sharper scrutiny from federal regulators are all pushing in the same direction. Companies need to show they have real governance structures around their AI systems, not just good intentions. Brooks Kushman says the organizations that get ahead of this now — by tightening data retention policies, building proper access frameworks, and training employees on what they can and cannot share with AI — will be in a far stronger position than those that wait.