Why Aren’t More FIs Using AI To Fight Fraud?

Payment fraud detection is an ideal use case for machine learning and artificial intelligence (AI), and financial institutions (FIs) have used these technologies to great effect. In fact, a typical consumer who gets a fraud alert might not even know there was an algorithm working behind the scenes that spotted suspicious activity.

One of the prominent findings in PYMNTS’ AI Innovation Playbook is that as with many innovations, there is a disconnect between perceived value and actual adoption.

Most FIs (63.6 percent) believe AI systems are effective in reducing fraud, yet only 5.5 percent have implemented one.

Despite lower perceived value in data mining (28.4 percent) and business rule management systems (17.6 percent), nearly all (92.5 percent) use data mining and 65 percent use business rule management systems (BRMS).

What’s standing in the way?

Insufficient real-time capabilities aside, a lack of transparency was an issue among fraud specialists (42.8 percent), and more than one-third (36.5 percent) cited an inability to quantify return on investment (ROI).

A majority (60 percent) think AI systems are time-consuming and complicated, while just 8.1 percent think the same of data mining. Fraud specialists also do not believe transparency is as big a problem in data mining as it is with AI; just 29.7 percent of them cited it as a shortcoming.

The perceived issues around transparency and complication aren’t necessarily inherent to AI systems; they are more a reflection of how these systems are presented and explained to the people using them.

Smart agents, which learn and make real-time observations from interactions with human users, have applications for fraud detection because they learn clients’ typical financial behaviors and are sensitive to unusual transactions.
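The learning loop described above can be sketched in a few lines. This is an illustrative toy, not CyberSource’s or Brighterion’s actual method: real smart-agent systems model many more signals (merchant, time, location, device), and the class name, threshold and minimum-history values here are assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

class SmartAgentSketch:
    """Toy per-user baseline: flags amounts far outside a user's typical range."""

    def __init__(self, z_threshold=3.0, min_history=5):
        self.history = defaultdict(list)   # user_id -> past transaction amounts
        self.z_threshold = z_threshold     # how many std devs counts as unusual
        self.min_history = min_history     # observations needed before flagging

    def observe(self, user_id, amount):
        """Record a transaction; return True if it deviates from the baseline."""
        past = self.history[user_id]
        unusual = False
        if len(past) >= self.min_history:
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(amount - mu) / sigma > self.z_threshold:
                unusual = True
        past.append(amount)
        return unusual

agent = SmartAgentSketch()
for amt in [42, 38, 45, 40, 37, 44]:
    agent.observe("user-1", amt)          # builds the user's typical pattern
print(agent.observe("user-1", 2500))      # large deviation -> True
```

The point of the sketch is the personalization Adjaoute describes later: the baseline is per user, so a $2,500 charge is unusual for this account but might be routine for another.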

“From an AI point of view, we have to build a system that’s really adept at identifying different types of users and their typical commerce patterns,” said Scott Boding, vice president of risk solutions product management for Visa’s CyberSource payments management platform team, in an interview with PYMNTS.

A majority (63.9 percent) of fraud specialists believe smart agents could reduce payments fraud. And fraud departments have higher levels of interest than other business units; 42.5 percent of fraud departments are “very” interested in using smart agents to reduce fraud, while 36.7 percent in accounts payable and 25 percent in payroll had similar interest levels.

Smart agents are also key to offering users better personalization. According to Akli Adjaoute, CEO of Brighterion, “You have to know (consumers) by their unique and differing behaviors. Other tech, such as business rules and data mining, cannot provide such personalization for financial institutions.”

Reducing manual review is the most commonly expected benefit (66.7 percent) of implementing smart agents among FI fraud specialists. Across all business units, even more (80.4 percent) expect reduced manual review to be a benefit.

“We are trying to use AI as a way to seize and automate a lot of the heavy lifting in fraud detection – a time-consuming task that many merchants are doing manually today,” Boding said.

The biggest inhibitor, however, is cost. All (100 percent) of the fraud specialists surveyed said implementing smart agents would be too expensive. Cost is a concern with many emerging technologies, but taking a wait-and-see approach can keep an FI from innovating.

Payroll departments are more likely to eschew smart agents because they find the results to be untrustworthy, while nearly three-fourths (73.7 percent) of accounts payable departments perceive smart agents as lacking in tangible benefits.

Cost challenges aside, Visa’s Boding noted that a common misconception about AI is that it possesses vision or knowledge, when in reality it is a tool to recognize patterns, predict behaviors and inform decision-making. “If we’re seeing a deviation from one of these typical patterns, the system takes an even closer look and submits it for more evidence, and [we] have a better sense of whether this is a legitimate user who has just changed behavior or whether it is a fraudster trying to commit bad acts,” he said.
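The escalation flow Boding describes can be sketched as a tiered decision: a deviation does not immediately block a payment; it triggers a closer look that weighs additional evidence before deciding. The tier thresholds, score scale and evidence signals below are invented for illustration.

```python
def route_transaction(deviation_score, extra_evidence):
    """Route a transaction by how far it deviates from the user's
    typical pattern (0.0 = entirely typical, 1.0 = extreme).

    extra_evidence maps signal names (e.g. known device, home region)
    to 1 if the signal supports the user being legitimate, else 0.
    """
    if deviation_score < 0.3:
        return "approve"            # matches the user's normal behavior
    if deviation_score < 0.7:
        # Borderline: take a closer look using additional signals
        # before sending a human reviewer after it.
        trusted = sum(extra_evidence.values()) >= 2
        return "approve" if trusted else "manual_review"
    return "decline"                # strong deviation, likely fraud

print(route_transaction(0.1, {}))                                     # approve
print(route_transaction(0.5, {"known_device": 1, "home_region": 1}))  # approve
print(route_transaction(0.5, {"known_device": 1}))                    # manual_review
print(route_transaction(0.9, {}))                                     # decline
```

The middle tier is what reduces manual review: only the cases that are both unusual and short on corroborating evidence reach a human analyst.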