Bank Regulators Warn That Traditional Rules Also Apply to AI


Financial regulators have yet to publish a comprehensive set of artificial intelligence (AI)-related rules, as most of their efforts so far have focused on guidelines and principles for financial institutions to make good use of AI. However, senior officials at banking regulators warned on Friday (June 3) that existing regulation could also apply to AI.

Even without AI-specific guidance from federal bank regulators, institutions could violate existing rules, officials from the Federal Reserve and the Office of the Comptroller of the Currency (OCC) said on Friday at an event hosted by the New York Bar Association.

“There’s a lot of existing guidance,” said Kevin Greenfield, the deputy comptroller for operational risk policy at the OCC.

U.S. financial regulators haven't yet issued a set of rules to guide banks' and other institutions' growing use of AI, a technology that has been deployed in applications as diverse as fraud monitoring, product pricing and loan applications. Consumer protection agencies such as the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) have issued press releases warning about the adverse effects of algorithmic biases. In the case of the FTC, the agency has also taken enforcement actions limiting a company's use of certain algorithms. The Financial Crimes Enforcement Network (FinCEN) and the OCC have publicly encouraged companies and banks to use AI in their processes, arguing that the benefits outweigh the risks, but have also warned of potential pitfalls. Yet none of these agencies has issued sweeping guidance in the form of formal rules.

Nonetheless, David Palmer, a senior supervisory financial analyst at the Federal Reserve, said the lack of guidance shouldn't prevent banks from having risk management practices in place.

“If they don’t have the appropriate governance, risk management and controls for AI, they shouldn’t use AI,” Palmer said in the same panel discussion.

Expressing his personal views, Palmer suggested that banking regulators may never produce a comprehensive set of AI-related rules, and that the agencies might instead issue regular, incremental updates to their positions as the technology and the markets evolve.

Federal Reserve regulators have begun to examine how institutions are using AI, at this point mostly as a way of educating the regulators, rather than to produce specific outcomes, Palmer said.

“We want to confirm that our institutions [are] using it the right way, they have the proper governance, risk management controls over it,” he said.

Greenfield of the OCC sent a similar message when he testified before Congress on May 13. The OCC has the tools to intervene if banks' use of AI is not properly managed, but the agency's approach for the moment is to focus on high-risk activities and to rely on banks to develop safe AI tools, he said.

He also warned that even without AI-specific rules, financial institutions should ensure that new AI tools comply with current requirements, particularly fair lending and other consumer protection rules. For instance, if the data set used by an algorithm is biased, or some elements of the algorithm are inaccurate, it could yield unfair results.
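As a concrete illustration of the kind of check a fair-lending review might involve, the sketch below computes an adverse impact ratio (the "four-fifths rule" screen commonly used in fair-lending analysis) on entirely made-up approval data. The data, the 0.8 threshold's use here, and the function names are illustrative assumptions, not anything prescribed by the OCC or the Fed.

```python
# Hypothetical illustration of a fair-lending screen: compare approval
# rates across two applicant groups. All data below is synthetic.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

# Synthetic loan decisions produced by a hypothetical scoring model
group_a = [True, True, True, True, False]      # 4 of 5 approved (80%)
group_b = [True, False, False, False, False]   # 1 of 5 approved (20%)

# Adverse impact ratio: disadvantaged group's rate / favored group's rate
ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.20 / 0.80 = 0.25

# Under the common four-fifths heuristic, a ratio below 0.8 is a red
# flag that the model's outcomes warrant closer fair-lending review.
if ratio < 0.8:
    print("Potential disparate impact -- review the model and its data.")
```

In practice, such a screen would run on real decision logs with properly defined groups, and a low ratio would trigger deeper statistical and model-governance review rather than an automatic conclusion of bias.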

But the OCC is not only encouraging banks and financial institutions to develop AI solutions for themselves. The agency is also committed to exploring the use of AI to improve insights for its supervisory, policy and risk analysis staff as part of the agency's supervisory system upgrade.

Read more: OCC Encourages Banks to Explore AI Solutions for RegTech