Financial Firms Embrace AI Tools and Face New Compliance Tests 

January 30, 2026

In boardrooms across financial services, the pressure to “use more tech” is no longer abstract. It’s urgent. AI can surface patterns humans miss. Cloud tools can cut costs and speed up launches. New computing models promise breakthroughs. But every one of those gains comes with a familiar question that now has sharper edges: if a tool helps you decide faster, who is responsible when the decision goes wrong?

That is the tension running through new guidance from Herbert Smith Freehills Kramer on decision-making in modern financial services. The firm’s core point is simple: technology is expanding the amount and variety of information leaders can use, which can produce better calls. But it can also magnify risk, especially regulatory risk, if governance does not keep up.

Herbert Smith frames the challenge as an exercise in “staying within the lines.” Adopt tools that improve outcomes, while meeting supervisory expectations that have not gone away just because the inputs are now digital. The authors focus on three areas where this balance is getting harder: AI agents, cloud-based AI, and the “near yet far” reality of quantum computing.

For AI agents, the warning is not that regulators are anti-AI. It’s that regulators expect firms to understand what the systems are doing, and to manage the risks that come with speed and scale.

The guidance notes that AI can improve tasks like credit assessment by analyzing more data, faster, but a flawed model can also amplify losses across a wider book of business. It also lays out practical pitfalls, from “black box” outputs that are hard to explain, to biased training data, to dependence on third-party providers outside a regulator’s perimeter.

That leads to the principle Herbert Smith wants decision-makers to take personally: “When planning to leverage technology in their processes, decision-makers must apply robust due diligence.”

The guidance connects that principle to what regulators are already signaling. Germany’s BaFin, the authors note, expects decisions to be explainable and pushes back on models that cannot show how they reached an output. Singapore’s MAS emphasizes transparency and explainability, and points to heightened oversight for higher-risk use cases. In the UK, the Senior Managers and Certification Regime effectively requires a named owner for material AI uses, with an expectation that the responsible executive understands the models and inputs well enough to evaluate risk.

On cloud-based AI, the guidance argues the upside is real (scalability, efficiency, reduced in-house costs), but so is the risk profile, especially when sensitive data sits in infrastructure you do not control. The authors point to the pace of adoption: the Hong Kong Monetary Authority has said cloud-related projects represent about 80% of reportable technology outsourcing initiatives by banks, with a meaningful share touching critical systems. They also emphasize the basics regulators keep returning to: cyber hygiene, third-party AI risk, and who can access critical systems.

Finally, the paper looks ahead to quantum computing. The technology may deliver competitive advantages, but Herbert Smith notes policymakers are concerned it could also stress today’s security foundations, pushing firms toward “quantum-safe” cryptography planning.

What comes next, the authors suggest, is more scrutiny, not less, as adoption accelerates. Firms will face continued expectations around documentation, post-deployment reviews, and monitoring once systems become business-as-usual. And they should be ready for oversight that tests whether governance is keeping pace with technology-driven decision-making, particularly where consumer impact, outsourcing, and explainability intersect.