BoE and FCA Favor a Light Touch Approach to AI Regulation

The Bank of England and the Financial Conduct Authority released their final report on the Artificial Intelligence Public-Private Forum, where regulators discussed the benefits and challenges of applying AI techniques, particularly in the financial sector. The report doesn't prescribe any specific public policy to follow. Still, the overall message is that a light-touch approach may be the preferred option for fostering innovation when it comes to AI regulation. 

The report explores the barriers to adoption, challenges and risks of using AI in financial services and how to address them. Divided into three sections (data, model risk and governance), the discussion paper weighs how companies and regulators can work together to minimize risks. 

Perhaps the most interesting part for regulatory purposes is governance. “Governance is crucial to the safe adoption of AI in financial services,” said the report. 

Read more: Final UK AI Public-Private Forum Report Says Governance ‘Crucial’

The regulators suggest that existing governance frameworks provide a good starting point for AI models and systems, as most financial services firms already use data governance and operational risk management frameworks. AI won't change these in the short term. Yet, when AI models start to operate "unsupervised," it is likely that new risk management and governance frameworks will be needed.  

As in other countries, the BoE and the FCA propose that high-risk and high-impact AI use cases should require more due diligence and supervision. But even for these cases, the approach is to divide accountability and responsibility among different decision-makers in the company. For instance, companies need to clearly define the relevant roles and lines of accountability at all stages of the AI governance hierarchy. This includes allocating responsibilities across the firm, from the design and development of AI models to the business areas that use the models, the compliance teams that oversee the risk management of those models, up to senior managers and board members. 

Transparency and Bias 

Both areas are important to regulators. Even though the report makes no mention of regulation, these issues are sensitive, and they are where we could see guidance or recommendations in the near future. 

Transparency concerns two audiences. The first is developers, compliance teams and regulators; the second is consumers. For the first group, the discussions between industry and regulators suggest that using standards to communicate how AI works, such as SHAP values or LIME plots, may be a way forward. For consumers, transparency is about how specific decisions are made, although the report doesn't suggest how this should be done. 
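To make the SHAP idea concrete, here is a minimal illustrative sketch, not drawn from the report: for a simple linear model, each feature's SHAP value has a closed form, the coefficient times the feature's deviation from its average. The model, feature names and numbers below are hypothetical.

```python
# Illustrative sketch (not from the report). For a linear model, the SHAP
# value of feature i is weight_i * (x_i - mean_i): each feature's
# contribution to how far this prediction sits from the average prediction.
# All names and figures below are hypothetical.

def linear_shap_values(weights, x, feature_means):
    """Per-feature contribution of one prediction relative to the average."""
    return {f: w * (x[f] - feature_means[f]) for f, w in weights.items()}

# Hypothetical credit-scoring model: score = 0.5*income - 2.0*debt_ratio
weights = {"income": 0.5, "debt_ratio": -2.0}
feature_means = {"income": 40.0, "debt_ratio": 0.3}
applicant = {"income": 50.0, "debt_ratio": 0.5}

contributions = linear_shap_values(weights, applicant, feature_means)
for feature, value in contributions.items():
    print(f"{feature}: {value:+.2f}")
```

A compliance team could present a table like this alongside each automated decision: the contributions sum to the gap between this applicant's score and the average score, which is the kind of standardized explanation the forum discussed.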

Bias is probably the most real and urgent concern to deal with. Bias may be embedded in aggregated data or introduced during processing. Regulators understand that biases may come from different sources: human decisions, data or processes. "It is the unintended biases and unintended outcomes that should be of concern. So it is important to have frameworks and decision-making processes that distinguish between 'good' and 'bad' bias," said the report. 

The paper suggests that regulators should make sure that the companies they supervise evaluate bias and fairness appropriately by defining criteria for measuring bias in models. The recommendation is to ensure companies have good compliance programs and a good culture to eliminate biases, but it stops short of recommending more intrusive measures, which would risk "being bound to what is known today and losing sight of what changes might come in the future, not only in how data are classified but also on how data are collected." In this case, the risk of overregulating AI, even to reduce possible biases, may have a detrimental effect on AI innovation, and a softer approach, like monitoring, is the preferred option. 
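As an illustration of what "defining criteria for measuring bias" might look like in practice, here is a minimal sketch, not taken from the report, of one widely used criterion: the demographic parity difference, i.e. the gap in favorable-outcome rates between two groups. The decision data below are hypothetical.

```python
# Illustrative sketch (not from the report). One simple bias criterion a
# firm might monitor is the demographic parity difference: the absolute
# gap in approval rates between two groups. Data below are hypothetical.

def approval_rate(decisions):
    """Share of positive (1 = approved) outcomes in a list of decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in approval rates between two groups (0 = parity)."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical loan decisions (1 = approved, 0 = declined) by group
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 3 of 8 approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")
```

A compliance program could track a metric like this over time and flag models whose gap exceeds an agreed threshold, which fits the monitoring-first approach the report favors over harder regulation.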

While this report isn't intended to be a policy roadmap, it provides a good overview of how UK regulators see AI: as an opportunity to foster innovation with minimal risks to consumers, if managed adequately. Even where risks are identified, the proposed way forward is to enhance communication between companies and regulators and increase transparency rather than to propose new regulations. However, new regulations cannot be ruled out. 

Read more: UK Seeks Its Place to Shape Global Standards in Artificial Intelligence 

Sign up here for daily updates on the legal, policy and regulatory issues shaping the future of the connected economy.