Commerce Department’s NIST Unit Seeks Comment on Draft AI Rules for Finance Sector

Use of artificial intelligence (AI) by financial institutions (FIs) and FinTechs is growing rapidly, and regulators are closing in on rulemaking for the advanced systems that power financial decisioning at scale, where a biased model can produce biased outcomes at scale.

After seeking input last summer, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) issued a draft of its “AI Risk Management Framework” on March 17, giving interested parties until April 29 to comment as it pushes the effort forward.

NIST said a second draft is expected by summer or fall of 2022.

In a March 17 statement on the NIST website about the public comment period, Elham Tabassi, chief of staff of the NIST Information Technology Laboratory (ITL), said the lab developed the draft after extensive input from the public and private sectors, “knowing full well how quickly AI technologies are being developed and put to use and how much there is to be learned about related benefits and risks.”

“AI risks and impacts that are not well-defined or adequately understood are difficult to measure quantitatively or qualitatively,” the draft notes, giving a glimpse at regulators’ concerns. “The presence of third-party data or systems may also complicate risk measurement. Those attempting to measure the adverse impact on a population may not be aware that certain demographics may experience harm differently than others.”

The draft comes after the Commerce Department formed the National Artificial Intelligence Advisory Committee in September to work with the National AI Initiative Office (NAIIO) in the White House Office of Science and Technology Policy (OSTP), and on the heels of the Algorithmic Accountability Act of 2022, introduced in February.

See also: AI in Financial Services in 2022: US, EU and UK Regulation

‘The Socio-Technical Perspective’

Along with publishing the draft framework for public comment, NIST released its report “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”

The report cautioned that AI users can introduce bias either purposefully or inadvertently, and that bias can also emerge as a system learns, perpetuating discrimination.

“Adopting a socio-technical perspective brings new requirements, many of which are contextual in nature, to the processes that comprise the AI lifecycle,” it noted. “It is important to gain understanding in how computational and statistical factors interact with systemic and human biases.”

See also: Cost of Proposed US AI Bill May Outweigh Its Benefits

A 2021 report produced by the Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS) said that AI and machine learning (ML) systems can be particularly problematic because they can’t pick up on the same contextual cues people do.

“An AI/ML system is generally as effective as the data used to train it and the various scenarios considered while training the system,” it said. “Lack of context, judgment, and overall learning limitations may play a key role in informing risk-based reviews, and strategic deployment discussions.”

See also: Companies Collaborate With Regulators to Limit AI Biases

In an interview with Sudhir Jha, senior vice president and head of Mastercard’s Brighterion unit, PYMNTS reported that “There’s a bit of lopsided embrace of AI as 79% of banks with more than $100 billion in assets use AI, but only a fraction of smaller banks do. And although progress has been made, the greenfield opportunity is significant. In 2018, 5% of FIs reported using AI systems in areas like credit risk management and fraud detection. By 2021, that figure had increased threefold to 16%.”

See also: Banks Seek AI Platforms-as-a-Service Amid Ever-Increasing Risk