
Legal and Risk Teams Are Struggling to Keep Up With AI 

March 3, 2026

Imagine your company’s AI customer service tool quietly starts making decisions about who qualifies for a product or service. Legal thinks it banned that. Engineering thinks the tool is just advisory. The product team thinks customers were told. Nobody actually mapped who owns the output or who can pull the plug. Hours pass before anyone figures out how to shut it down.


    That scenario is not hypothetical. It’s the kind of mess that a new advisory from law firm Lowenstein Sandler is warning companies to prevent right now, before regulators come knocking.

    The firm published a detailed action plan this week for companies trying to get their AI governance in order, warning that the window to get ahead of the issue is closing.

    Companies rushed to adopt AI throughout 2025. In doing so, many discovered that AI didn’t just introduce new risks; it exposed old ones. Outdated infrastructure, unclear ownership, contracts that never anticipated machine learning. These were problems sitting quietly in the background for years, sometimes decades. AI made them impossible to ignore.

    “AI did not just create new risks; it also acted as a high-speed searchlight, exposing the infrastructure gaps many organizations have carried since the late 1990s,” the firm wrote. “Now, we are closing an era of deferred maintenance.”

    The regulatory pressure is real and building, per Lowenstein. California has already passed a mandatory AI risk framework requiring companies to complete formal AI risk assessments by December 31, 2027. That may sound distant, but Lowenstein’s lawyers argue that building a solid governance program takes 12 to 18 months of serious work, and the clock has already started.


    At the federal level, President Trump’s December executive order established a national AI policy framework. Exactly how far federal rules will reach, and whether they will override state laws, remains an open legal fight. But regulators want answers now.

    Read more: Treasury Issues AI Risk Management Guides for Banks and Fintechs 

    The firm lays out a three-phase plan for companies to follow. The first phase, covering roughly the first three months, focuses on the basics: figure out what AI tools your company is actually using, assign someone responsible for each system, and update your incident response plans. These steps can start immediately.

    Phase two, running from months three through nine, gets more technical. It involves tightening contracts with AI vendors — making sure agreements address who owns training data, who bears responsibility if the model causes harm, and whether the company can audit what the vendor is doing.

    The third phase is about staying vigilant over the long term: monitoring AI systems in production, watching for drift or bias, and reporting regularly to board members and executives.

    According to Lowenstein, regulators are not expecting perfection. But they are expecting evidence of effort. That means having an inventory of AI systems ranked by risk, updated incident response plans that specifically cover AI problems, and a governance committee with a formal charter.
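    The "inventory of AI systems ranked by risk" can be pictured as a simple data structure. The sketch below is purely illustrative: the field names, risk tiers, and example systems are assumptions for demonstration, not Lowenstein's template or the NIST AI RMF's taxonomy.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

# Illustrative risk tiers; a real program would map these to its
# chosen framework's categories (e.g., the NIST AI RMF).
class RiskTier(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystem:
    name: str
    owner: str                  # accountable person, per phase one
    vendor: Optional[str]       # None for in-house models
    risk: RiskTier
    can_be_disabled: bool       # is there a documented way to pull the plug?

def risk_ranked(inventory: list) -> list:
    """Return the inventory with the highest-risk systems first,
    e.g., for board and executive reporting."""
    return sorted(inventory, key=lambda s: s.risk, reverse=True)

# Hypothetical example entries:
inventory = [
    AISystem("doc-summarizer", "a.lee", None, RiskTier.LOW, True),
    AISystem("support-chatbot", "j.doe", "VendorX", RiskTier.HIGH, True),
]

for system in risk_ranked(inventory):
    print(system.name, system.risk.name)
```

    Even a minimal register like this answers the questions the opening scenario raises: who owns each system's output, and whether anyone can shut it down.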

    The recommended framework for all of this is the National Institute of Standards and Technology’s AI Risk Management Framework, known as NIST AI RMF. It has become the closest thing to an industry standard for AI governance in the U.S., and regulators increasingly recognize it.

    Per Lowenstein Sandler, companies that start now — mapping their systems, assigning accountability, tightening their contracts — will be far better positioned than those that wait for a crisis to force the issue.