Fed Official: AI Can Promote Bias in Lending Decisions


Michael Barr has become the latest regulator urging caution on the use of artificial intelligence.

In a speech Tuesday (July 18), Barr — the Federal Reserve’s Vice Chair for Supervision — said that while artificial intelligence (AI) has the potential to expand credit to people who couldn’t access it otherwise, it also carries risks.

“Use of machine learning or other artificial intelligence may perpetuate or even amplify bias or inaccuracies inherent in the data used to train the system or make incorrect predictions if that data set is incomplete or nonrepresentative,” Barr told the National Fair Housing Alliance 2023 National Conference in Washington. 

He added that AI could heighten the risk of "digital redlining" in marketing, the use of criteria to block minority applicants or their communities.

“New technologies can also result in ‘reverse redlining,’ or steering in the advertisement of more expensive or otherwise inferior products to minority communities,” Barr said.

He also noted that while banks were still early in the process of adopting AI, the Fed was making sure its supervision kept pace.

Barr’s speech came one day after Rohit Chopra, director of the U.S. Consumer Financial Protection Bureau (CFPB), announced he was collaborating with his European counterpart on a number of issues, including the use of AI in lending decisions.

Chopra had addressed the topic earlier this year in an interview with The Associated Press on his agency’s work to tackle some of the challenges presented by AI.

“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” he said. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”

These concerns, as noted here last week, illustrate how AI has become a “non-artificial” threat for regulators.

“That’s because while the apocalyptic doomerism surrounding the technology’s threat to humanity as a whole may be overblown, there still exist right-now risks to individuals as related to the novel tech’s ability to generate deepfakes and spread nefarious misinformation — whether by design or inadvertently,” PYMNTS wrote.

That sort of concern has apparently led the Federal Trade Commission (FTC) to launch an investigation into whether OpenAI and its chatbot ChatGPT have harmed people by publishing false information about them.

“As the world goes digital, the risks of doing business do too,” PYMNTS wrote. “While the world waits on AI regulation, now is a good time for firms to ensure their data compliance programs adhere to the rules already on the books around innate sensitivities.”