Generative AI Co-Pilots Help Banks Beat Back $4 Trillion in Financial Crime

Today’s online world offers more vulnerabilities to exploit, and criminals have taken notice.

“The total value of money laundering globally per annum is about $4 trillion,” Mike Foster, president and CEO of SymphonyAI NetReveal, told PYMNTS CEO Karen Webster.

“That’s about 4% of global GDP,” he emphasized.

And it’s not mom-and-pop bad actors responsible for the bulk of the illegal activity.

Foster explained that “the business of money laundering is equally huge… a significant amount of money is being laundered by highly organized, vast organizations that look in many instances like everyday corporate companies — only they are unencumbered by budgets and governors.”

While digitization has given firms and institutions better, more agile ways to defend and protect themselves, new tools like generative artificial intelligence (AI) have also opened a Pandora’s box of attractive tactics for cybercriminals operating at any scale.

That’s because cybercriminals tend to be sophisticated and seek out new ways to defraud their targets, whether individuals or businesses.

“All of the things that [organizations] are trying to mobilize and solve for, the [bad actors] are doing the same thing but the opposite,” Foster said. “They’re scaling up their organized crime groups, they’re scaling up their tech and the way in which they invent new tactics for fraud and laundering their gains at an increasingly aggressive rate.”

See also: Online Scam Complexity Is Paradoxically Keeping Banks’ Fraud Controls Offline

Tapping Into AI’s Ability to Disrupt Illegal Activities

Within this fast-moving landscape full of pitfalls, organizations that continue to rely upon manual and reactive anti-fraud tools experience slower growth than those using proactive and automated solutions, according to data in the 2023 PYMNTS playbook “Digital Payments Technology: Investing in Payments Systems for the Digital Economy.”

“AI is not a new thing, but the readiness to adopt it has probably been hindered by some fear of the unknown,” Foster said.

PYMNTS data supports Foster’s theory. Four in 10 executives said they were concerned about the day-to-day complexity of using new technologies to fight fraud, and the same 40% cited the presumed difficulty of integrating new fraud prevention controls with existing systems as a factor holding them back.

Still, leveraging AI to fight fraud is one of the best options available to organizations. In today’s environment, “you’re not just trying to find the needle in a haystack; you’re trying to find a needle in a stack of a billion needles that all look the same,” Foster explained.

“In essence, it is getting to the point where there just aren’t enough people with the right skills,” he added. “A human can only do so many things over an eight- or 10-hour shift.”

He said AI is “exposing more” as it optimizes the way in which potentially illicit activity is found, prioritized and managed.

“And if you have to do more with more,” Foster said, AI is your best bet — particularly as transactions increasingly become digital.

Enhancing, Not Replacing, Human Oversight

AI can empower firms to build case files in real time, and emerging generative tools can take historical predictive capabilities to the next level by learning from large data sets what investigators are looking for.

Yet even though the human element may be a well-protected enterprise’s biggest vulnerability, anti-fraud tools “ultimately rely on a human being,” said Foster.

What AI does, he explained, is “dispense with the rubbish” at the top of a fraud alert funnel, helping change the shape “from a funnel to a nail head, accelerating the workflow of what [businesses] should be interested in.”
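Foster’s funnel-to-nail-head image can be illustrated with a toy triage step (a hypothetical sketch, not SymphonyAI NetReveal’s actual product logic): an upstream model assigns each alert a risk score, and only the highest-scoring alerts are passed to human investigators.

```python
# Hypothetical illustration of AI-assisted alert triage: an upstream model
# has assigned each fraud alert a risk score, and only high-scoring alerts
# reach human investigators -- narrowing the "funnel" to a "nail head".
# Field names and thresholds here are illustrative assumptions.

def triage_alerts(alerts, threshold=0.9):
    """Keep only alerts whose model-assigned risk score clears the threshold,
    ordered so investigators see the riskiest cases first."""
    kept = [a for a in alerts if a["risk_score"] >= threshold]
    return sorted(kept, key=lambda a: a["risk_score"], reverse=True)

alerts = [
    {"id": "A1", "risk_score": 0.97},  # e.g., likely mule-account activity
    {"id": "A2", "risk_score": 0.12},  # benign: routine payroll deposit
    {"id": "A3", "risk_score": 0.91},  # e.g., structuring pattern
    {"id": "A4", "risk_score": 0.45},  # low-risk anomaly
]

queue = triage_alerts(alerts)
print([a["id"] for a in queue])  # -> ['A1', 'A3']
```

The point of the sketch is the shape change Foster describes: four raw alerts enter the funnel, but only the two that clear the risk threshold reach the human workflow.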

Human judgment plays a very important part, Foster said, emphasizing that “the value of AI isn’t to reduce the number of people in your organization.”

Still, applying generative AI capabilities to the world of financial crime fighting opens up a bevy of exciting possibilities.

“Once a particular fraud is disrupted, it is no longer economically viable, so [bad actors] take on a new engagement model,” Foster explained.

He said he sees great potential in leveraging generative AI to help predict just what those future engagement models might look like and to help organizations defend themselves pre-emptively against them.

“Generative AI is excellent at creating synthetic data of a great quality, which can let [firms] build a deep, rich picture,” he said. “And when you have people with really great [anti-fraud] skillsets and experience, they can train the models to predict in advance what the next thing might be, so you can be ahead of it. The whole spatial, temporal piece as it relates to the profile of the crime group is really important.”
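Foster’s point about high-quality synthetic data can be sketched in miniature (a hypothetical example, not the firm’s actual pipeline): fabricate labeled transactions that mimic a known laundering pattern, such as structuring deposits just under a reporting threshold, so models can train on rare behavior without exposing real customer data.

```python
import random

# Hypothetical sketch of synthetic training data for a fraud model:
# generate transactions mimicking "structuring" (deposits kept just under
# a $10,000 reporting threshold) alongside ordinary activity. The amounts,
# labels, and threshold are illustrative assumptions.

random.seed(42)  # reproducible example

def synthetic_transactions(n, structuring=False):
    """Return n labeled synthetic transactions."""
    txns = []
    for _ in range(n):
        if structuring:
            amount = round(random.uniform(9000, 9999), 2)  # just under threshold
        else:
            amount = round(random.uniform(10, 5000), 2)    # everyday spending
        txns.append({"amount": amount,
                     "label": "suspicious" if structuring else "normal"})
    return txns

# A balanced training set: 500 normal plus 500 structuring examples.
training_set = synthetic_transactions(500) + synthetic_transactions(500, structuring=True)
suspicious = [t for t in training_set if t["label"] == "suspicious"]
print(len(training_set), len(suspicious))  # -> 1000 500
```

In practice the synthetic records would carry far richer features (timing, counterparties, geography), but the idea is the same: the generator encodes an expert’s profile of a crime pattern so a model can learn to spot it before it appears in live data.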