Bank of England Probes AI Threats to UK Financial Stability


The Bank of England is conducting tests to better understand the risks and opportunities artificial intelligence (AI) presents for the financial sector, Sarah Breeden, deputy governor for financial stability at the Bank of England, said in a letter to the U.K. Parliament’s Treasury Committee.


The letter was written in response to a Treasury Committee report recommending that the Bank undertake AI-specific stress testing, and it was published by the Committee Thursday (April 16), according to a press release from the Committee.

According to the letter, the Bank of England is undertaking scenario analysis focused on plausible macroeconomic and core financial market outcomes resulting from investment, development and adoption of AI, as well as potential risks to U.K. financial stability.

“This scenario analysis will help ensure that a wide range of plausible outcomes arising from AI investment, development and adoption scenarios are encompassed by the Bank’s broader approach to stress testing, including system-wide exercises,” Breeden said in the letter. “In addition, we are working to incorporate AI scenarios into various forms of cyber and operational testing of the financial sector.”

The Bank of England is also working with its international counterparts on simulation methods to better understand how AI agents trading in financial markets could amplify a stress scenario through correlated behavior or “herding,” Breeden said.

“This work could also explore how such dynamics could be mitigated, for example through exploring how agents’ objective functions should best take account of public policy objectives,” Breeden wrote in the letter.


Treasury Committee Chair Meg Hillier said in the press release that Anthropic’s Mythos AI model and other developments around AI show how fast the technology is moving.

“It has never been more important that those responsible for maintaining the UK’s financial stability take a proactive approach to understanding and mitigating the risks AI may pose to our financial system,” Hillier said.

It was reported Sunday (April 12) that British financial regulators convened urgent discussions with the government’s cybersecurity agency and leading financial institutions to evaluate potential risks linked to a new AI model developed by Anthropic.

Officials from the Bank of England, the Financial Conduct Authority (FCA) and the Treasury are working alongside the National Cyber Security Centre to examine vulnerabilities in critical IT systems that may have been exposed by the AI model.

It was reported Thursday that Anthropic is ready to begin offering its Mythos AI model to British banks as part of its Project Glasswing, which offers select organizations early access to the model.