A PYMNTS Company

How To Approach Governance of AI Agents

May 8, 2026



In this piece for Norton Rose Fulbright's Data Report, authors Susana Medeiros, Steve Roosa, and Wenda Tang discuss the growing risks associated with agentic AI systems and argue that current governance models rely too heavily on "Band-Aid" approaches that attempt to control AI behavior after deployment rather than embedding governance directly into system design. The authors advocate for a "trust assurance" framework in which governance, controls, and operational limits are built into the architecture of AI systems from the outset.

The article criticizes common assumptions made by some AI security vendors, particularly the idea that AI agents should operate with highly dynamic or unpredictable behavior. The authors argue instead that effective agentic AI systems should have clearly defined success and failure modes, tightly controlled workflows, and explicitly limited API access governed by static logic and predefined rules. Allowing unrestricted or probabilistic AI-driven tool usage, they warn, can create cascading failures, silent system misbehavior, and increasingly dangerous exploit chains.

To illustrate their argument, the authors compare AI governance to aviation safety systems, where automated controls prevent unsafe pilot actions before problems occur. In the same way, they argue that AI agents should be prevented from exceeding predefined operational boundaries rather than relying primarily on after-the-fact detection and remediation. Governance should focus on preventing unsafe behavior during system operation, not simply reacting once failures occur.

The piece also emphasizes the growing importance of lawyers, compliance professionals, and procurement teams in AI governance, suggesting these stakeholders may become the first line of defense in ensuring organizations implement meaningful controls over AI systems. The authors recommend that organizations begin every AI deployment by asking foundational governance questions about trust, system permissions, data usage, operational limits, and risk monitoring, to minimize unnecessary exposure and prevent uncontrolled scope creep over time.
