Lawmakers Warn FinTechs About Potential Bias Baked Into AI-Based Financial Tools


As financial technology firms grow from startups into a core part of the financial industry, their already heavy reliance on algorithm-based artificial intelligence (AI) keeps growing.

And it’s not just finance. In a January 2021 report, auditing and professional services firm PwC found that 86% of C-suite executives said AI would “become a mainstream technology in their companies” within the year. A quarter said AI is already in widespread use at their firms.

The promise that artificial intelligence and machine learning hold is great: Using complex algorithms, machines can be taught to mimic human intelligence, performing tasks ranging from addressing and resolving customer complaints to deciding in real time whether to approve a home mortgage loan application.

However, a letter sent by House Financial Services Committee Chairwoman Maxine Waters (D-CA) and Rep. Bill Foster (D-IL), chair of the Task Force on Artificial Intelligence, to the heads of five U.S. agencies with financial oversight responsibilities, including the Federal Reserve, the Consumer Financial Protection Bureau (CFPB) and the FDIC, suggests that promise comes with serious risks.

Highlighting “the risks and possible benefits that emerging technologies pose in the financial services and housing industry,” Reps. Waters and Foster urged them to take steps to ensure that AI and machine learning tools are “used in an ethical manner, and help low- and moderate-income communities of color that have been underserved for far too long.”

More specifically, they want the government agencies to ensure that algorithmic bias is kept out of AI-based financial tools. “Financial institutions must fully understand that they are bound by these civil rights protections when building and manipulating datasets,” Waters and Foster wrote. “Regulators must subject financial institutions using AI to comprehensive audits of their algorithmic decision-making processes.”

They further specify that this means having the expertise and the personnel budgets to monitor outcomes for protected classes, including race, color, religion, national origin, sex, marital status, age and receipt of public assistance, "even when those attributes are not considered explicitly by the AI."
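What might such an audit look like in practice? The letter doesn't prescribe one, but as a rough sketch, an auditor could join a model's decisions to demographic data held outside the model and compare approval rates across groups. The data, group labels and threshold below are illustrative assumptions, not figures from the letter:

```python
# A minimal sketch of an outcome audit: the model never sees race, but an
# auditor joins its decisions to demographic data recorded separately and
# compares approval rates by group. All numbers here are made up.
import pandas as pd

# Hypothetical audit log: one row per application, with the model's
# decision and the applicant's group kept outside the model itself.
audit = pd.DataFrame({
    "group":    ["A"] * 1000 + ["B"] * 1000,
    "approved": [1] * 800 + [0] * 200 + [1] * 650 + [0] * 350,
})

# Approval rate for each group.
rates = audit.groupby("group")["approved"].mean()
print(rates)  # A: 0.80, B: 0.65

# Adverse impact ratio: the least-favored group's rate divided by the
# most-favored group's. A common informal screening threshold is 0.8.
air = rates.min() / rates.max()
print(f"adverse impact ratio: {air:.2f}")  # 0.81
```

The 0.8 cutoff here borrows the "four-fifths rule" from employment law as a screening heuristic; formal fair-lending reviews rely on more rigorous statistical tests.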

Setting Bad Examples

At issue is the "mimic human intelligence" part. Humans, AI developers among them, have biases. More to the point, the fear is that the huge datasets used to train AI systems about the fields in which they will provide information and recommendations can be tainted by historical bias.

A study of two million home mortgage applications filed in 2019 found that lenders, who generally use AI to recommend whether to grant a loan, were 40% more likely to turn down Latino applicants than white applicants in similar financial situations. Black applicants were 80% more likely to be turned down. (These are relative rates: if 10% of comparable white applicants were denied, the corresponding figures would be roughly 14% and 18%.)

The whys are complex, but mortgage-approval AI is trained on historical data. Among other things, that data reflects a long history of lenders "redlining," refusing to make loans in minority neighborhoods, and credit scores shaped by the generational wealth, such as homeownership, that white families were better able to accumulate.
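To see how that history can leak into a model that is never shown race, consider a small synthetic sketch: because of residential segregation, ZIP code acts as a proxy, so a model trained only on income and ZIP reproduces the old pattern. Every variable and rate below is invented for illustration:

```python
# Synthetic illustration of proxy bias: the training labels encode
# historical red-lining by ZIP code, and a model trained without any
# race column still reproduces the disparity. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute; never given to the model.
group = rng.integers(0, 2, n)

# Residential segregation: group 1 mostly lives in the historically
# red-lined ZIP (zip_code == 1), group 0 mostly does not.
zip_code = (rng.random(n) < np.where(group == 1, 0.9, 0.1)).astype(int)

# Income is drawn identically for both groups.
income = rng.normal(50_000, 10_000, n)

# Historical labels: most applications from the red-lined ZIP were
# denied regardless of income -- the biased training signal.
approved = ((income > 45_000) &
            ~((zip_code == 1) & (rng.random(n) < 0.7))).astype(int)

# Train on income and ZIP only; there is no race feature.
X = np.column_stack([income / 10_000, zip_code])  # scale income for the solver
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g} predicted approval rate: {pred[group == g].mean():.2f}")
# The approval gap persists even though 'group' was never an input.
```

Dropping the ZIP column wouldn't fully fix this, since other features correlated with geography can carry the same signal, which is why the letter presses regulators to audit outcomes rather than just inputs.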

Beyond the finance industry, in 2015, Amazon overhauled its hiring algorithm after realizing it was biased against women. Why? Because it was trained on resumes submitted over the previous decade, mostly by men. In 2019, an algorithm widely used by hospitals was found to be less likely to refer African American patients for treatment than equally ill white patients, because it estimated need from historical records that reflected white patients' better insurance and access to medical care.

Noting that "FinTechs and others using these technologies should play their part in building a more just and fair financial system in the 21st century," Waters and Foster's letter urged the agencies to "prioritize principles of transparency, enforceability, privacy, and fairness and equity. This will ensure that AI regulation and rulemaking can meaningfully address appropriate governance, risk management and controls over AI."