
As Companies Move From AI Testing to Implementation Compliance Takes Center Stage

January 19, 2026

If 2025 was characterized by debate and legislative steps toward AI governance regulations, 2026 will be characterized by concrete enforcement actions and compliance deadlines. That’s according to an analysis of the legal landscape by two attorneys with Baker Donelson published in CPO Magazine.


With regulators in the EU and U.S. enforcing new governance standards, and courts approaching key decisions in intellectual property cases related to AI, compliance teams will have their work cut out keeping abreast of the new red lines and deadlines, the authors write.

As of August 2025, providers of certain general-purpose AI (GPAI) systems in the EU have been required to create and maintain detailed technical documentation and make it available to the AI Office; provide detailed summaries of content used in training models; and ensure policies are in place for compliance with EU copyright and other IP laws.

Investigation and enforcement of some of those provisions, as well as penalties for non-compliance, however, do not take effect until August 2026.

While the U.S. has nothing comparable to the EU's comprehensive AI Act, several U.S. states, including Colorado, California, Texas, New York and Utah, have enacted significant AI regulations, many of which take effect in 2026 or on January 1, 2027.

As of 2025, U.S. Treasury rules have prohibited U.S. persons from investing in foreign entities, particularly in China, that develop AI with potential military or surveillance applications. Baker Donelson advises VCs and private equity clients to “strictly vet portfolio companies for exposure to restricted foreign AI development.”

Another emerging area of risk involves agentic AI liability. Questions have been raised as to whether a user is bound by a disadvantageous contract executed by an AI agent. Courts have also begun scrutinizing whether users or developers bear liability for autonomous errors. Baker Donelson advises clients to review vendor contracts for AI agents to ensure indemnification clauses specifically address autonomous actions and hallucinations that result in financial losses.

The authors also note that the U.S. Federal Trade Commission and Justice Department, as well as the U.K.’s Competition and Markets Authority, have begun investigating certain “pseudo-mergers” in which incumbent tech companies hire a startup’s leadership and license its IP to evade Hart-Scott-Rodino merger reviews. If such moves are found to harm competition or monopolize compute resources, those agreements could be unwound and the acquiring company penalized.

The use of AI and algorithmic tools in hiring and other practices is also poised for greater scrutiny by regulators in 2026, according to Baker Donelson. Resume-screening algorithms that haven’t undergone bias audits can create class-action exposure under Title VII and the Age Discrimination in Employment Act of 1967 (ADEA), the authors warn. Organizations are advised to conduct third-party bias audits, where required by law, for any automated employment decision tools used in their human resources departments.

In light of the new rules and risks, the Baker Donelson advisory recommends the following steps for general counsels and compliance officers:

• Take inventory of AI assets across your organization, including all shadow AI use, to ensure comprehensive governance and compliance;
• Review all vendor agreements and update them if necessary to shift liability for IP infringement and autonomous errors to AI providers;
• Adopt the rule of the strictest state regulation in crafting compliance programs and continuously monitor state developments;
• Establish internal incident-response protocols for cases of AI-related errors or hallucinations, and for regulatory inquiries.