Digital Banking

Building A Responsible AI Roadmap

FIs are turning to artificial intelligence to make more financial decisions than ever before, from deciding who gets (or doesn’t get) a loan to helping customers make smarter financial choices. But when algorithms make those decisions, who accounts for the biases built into the model? In the latest Digital Banking Tracker, Dan Schrag, co-director of the Science, Technology and Public Policy program at Harvard’s Belfer Center, explains why the time has come for FIs to get ahead of the ethical dilemmas surrounding AI, or risk confusing, and ultimately losing, customers.

AI is currently one of the most-watched technological developments — and for good reason. After all, almost every major industry is implementing AI-based solutions to help improve its operations.

Banks, credit unions, governments and healthcare companies, among others, have all developed an interest in AI and machine learning solutions, largely because both technologies can rapidly review and process vast amounts of data. That capability is key to their appeal: Companies can respond to such analysis more quickly and confidently.

For all the benefits that AI solutions promise companies, governments, manufacturers and FIs, they invite just as many potential issues. That list includes how automated solutions can impact — or even eliminate — a number of human jobs.

A recently launched initiative takes an academic approach to developing a more responsible AI policy, however. In April, the Harvard Kennedy School’s Belfer Center for Science and International Affairs announced a collaboration with Bank of America to launch the Council on the Responsible Use of Artificial Intelligence. The council aims to join leaders from a wide range of industries — including finance, government, business and academia — to formulate a strategy for addressing AI and its challenges.

PYMNTS recently spoke with Dan Schrag, co-director of the Belfer Center’s Science, Technology and Public Policy program, to gain greater insight into the council’s mission. With so many industries looking to AI to deliver greater efficiencies, Schrag believes the council will serve as a forum for a broader conversation among leaders in the space, helping them better understand the opportunities as well as the possible problems.

“There needs to be a discussion of what is responsible, what is the appropriate use of these technologies and where the limits are,” he said.

Widening the AI discussion

The council, which will officially convene later this year, will join leaders from different industries to explore both the ethical challenges of AI and the responsible ways to implement it.

The formation of the council was necessary, Schrag said, because the AI debate is often a one-sided discussion among tech players like Google, Amazon, Facebook or Twitter. The trouble with these tech companies leading the AI implementation charge is that other industries’ views and concerns could go unaddressed.

“The key is taking it outside the realm of just the tech sector,” Schrag said.

Inviting leaders from several industries to participate also brings their perspectives on how AI will be used in their respective fields. With all parties gathered, the council will work to define appropriate AI uses across a wide range of scenarios.

To err is human

One of those scenarios for consideration is exactly how AI-based solutions arrive at their decisions, giving consumers and companies a better understanding of how these systems think.

The appeal of AI is that it can quickly analyze large volumes of data to produce a more informed decision. In the financial sector, though, these decisions can have far-reaching consequences, such as whether a customer is approved for a mortgage or for a loan to expand their business.

AI and machine learning solutions rely on human-designed algorithms to reach their decisions, but the humans behind them are sometimes unaware that their personal biases and attitudes can inadvertently be introduced into the equations. As Cathy O’Neil, Harvard Ph.D. mathematician and author of Weapons of Math Destruction, told a PYMNTS Innovation Project audience last year, algorithms are intended to be neutral, and are often perceived as such, because they rely on math. But they are built using data and definitions of success, both of which can be subjective.

An algorithm can become a Weapon of Math Destruction (WMD) when it has a long-lasting impact on an individual’s life, when its use is widespread and when it is poorly understood outside the technicians who designed it, she added.

Developing a deeper understanding of how AI arrives at its decisions is an essential component of the council’s mission, according to Schrag. Without this, a system can become a “black box,” or a device that intakes information and produces answers without a transparent view of how it arrived at those conclusions.

“Let’s say a computer turns you down for a loan,” Schrag said. “Do you have the ability to challenge that? How difficult is it?”

Without insight into how an AI solution reached its decision, that system could hold strong sway over individuals’ lives — in the financial sector, business or beyond.
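To make the “black box” concern concrete, consider a purely hypothetical sketch (the field names, thresholds and rules below are invented for illustration and are not drawn from any real lending system): the same loan decision, surfaced with and without the reasons behind it. A declined applicant can only challenge the second version, because only it says what to challenge.

```python
# Toy illustration of opaque vs. transparent automated decisions.
# All rules and thresholds here are made up for the example.

def black_box_decision(applicant):
    """Returns only a verdict; the internal score is never explained."""
    score = (0.4 * applicant["credit_score"] / 850
             + 0.6 * (1 - applicant["debt_to_income"]))
    return "approved" if score > 0.7 else "declined"

def transparent_decision(applicant):
    """Returns a verdict plus the specific reasons for a decline."""
    reasons = []
    if applicant["credit_score"] < 660:
        reasons.append("credit score below 660")
    if applicant["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    verdict = "declined" if reasons else "approved"
    return verdict, reasons

applicant = {"credit_score": 610, "debt_to_income": 0.5}
print(black_box_decision(applicant))    # verdict only
print(transparent_decision(applicant))  # verdict plus reasons
```

The difference matters precisely in Schrag’s scenario: with the first function, a customer turned down for a loan has nothing specific to dispute; with the second, each stated reason is a claim that can be checked and contested.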

‘Reskilling’ the workforce

Schrag noted that the rise of AI in finance and other sectors leads to another ethical challenge: how to treat employees whose jobs are at risk of becoming obsolete as a result of task-automating technology. In banking, for example, chatbots can reduce the need for customer service associates, just as ATMs have reduced the need for live tellers.

While advancements in technology can put certain jobs at risk, they can also invite new opportunities. Part of the council’s mission is to explore ways to “reskill” the workforce, thereby addressing the needs of workers who are displaced by AI-based solutions.

“In general, a broad class of technologies are going to offer dramatic and exciting opportunities for people,” Schrag said, noting that society has already had to adjust to changes stemming from technological solutions. “It turns out we figured out how to use the assembly line effectively [when it debuted]. There were ethical concerns, and they weren’t necessarily dealt with perfectly, but society adapted. Some jobs were lost, but new jobs were [also] created.”

As AI becomes more prevalent in banking and business, the council’s role is to assist these industries in addressing the early-stage ethical challenges that AI-based technologies present — and doing so now, instead of dealing with the consequences down the road.

“Given these [solutions] that are emerging that represent exciting changes in technology, it’s good to be out in front of these ethical issues and actually talk about what’s ‘responsible’ before there’s a crisis,” Schrag explained.

As companies turn to AI to process data and quickly arrive at informed decisions, the ethical questions these solutions raise still warrant exploration and discussion. It’s better to have those discussions, and to prepare, now than to react to the potential consequences later.


About the Tracker

The Digital Banking Tracker™, powered by Feedzai, brings you the latest news, research and expert commentary from the FinTech and consumer banking space, along with rankings of over 300 companies serving or powering the digital banking sector.

