Banks Need to Be on the Cutting Edge of AI’s Double-Edged Fraud Sword

The greatest innovations are those that democratize access to new skills and empower populations.

Generative artificial intelligence (AI) promises to be one of those innovations.

But a side effect of that democratization is that it can be used by anyone — even criminals and bad actors.

And as AI continues to evolve, so do the tactics of fraudsters.

“Everyone has an equal ability to deploy technology, no matter who they are,” Karen Postma, managing vice president of risk analytics and fraud services at PSCU, told PYMNTS.

Generative AI programs like OpenAI’s ChatGPT have made phishing and other behaviorally driven fraud techniques not only more effective and convincing, but also easier to conduct at scale.

“Utilizing generative AI, a fraudster can effectively mimic a voice within three seconds of having recorded data,” Postma said.

Fraudsters can use these recordings to impersonate individuals, potentially deceiving even the most cautious of consumers. The proliferation of AI-generated voices in scams poses a serious threat, eroding trust and making it difficult for individuals to discern genuine calls from fraudulent ones.

Staying Ahead of Fraudsters Means Never Losing a Step 

Because fraudsters are quick to adopt new technologies and are relatively unconstrained by regulation or moral considerations, their pace can make it challenging for credit unions and other financial institutions to keep up.

“Fraudsters are utilizing AI to not just commit attacks, but to become very good at committing these attacks,” Postma said.

She added that traditional guardrails and red flags, like a CVV (card verification value) mismatch, an account not on file or a high number of declines, are becoming less reliable as cybercriminals increasingly use AI in their attacks.

Adding to the challenge is that today’s bad actors operate across multiple channels, and detecting their activities requires a cross-functional analysis of data.

“If you have a tool that is monitoring your call center, a tool that is monitoring your online banking, and a tool that is monitoring your transactions — they might only be singularly seeing individual interactions, which might not necessarily look suspicious, but are really part of a pattern of bad activity,” Postma explained.

This requires financial institutions to adopt a more holistic approach to fraud detection that combines data from various channels. 

“From a historical perspective, financial institutions don’t combine data and utilize data cross-functionally very well. But in order to prevent AI fraud, it is becoming more important to combine data and execute on the pattern analysis very, very quickly,” Postma said.
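Postma does not detail how PSCU implements this, but the core idea — that individually benign events become suspicious when correlated across channels — can be sketched in a few lines. This is a hypothetical illustration only; the event fields, channel names, thresholds and one-hour window are assumptions, not a description of any institution's system:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical events from three separately monitored channels.
# Each looks routine on its own; together they form a pattern.
EVENTS = [
    {"account": "A1", "channel": "call_center", "action": "password_reset",
     "time": datetime(2024, 1, 5, 10, 0)},
    {"account": "A1", "channel": "online_banking", "action": "new_device_login",
     "time": datetime(2024, 1, 5, 10, 20)},
    {"account": "A1", "channel": "transactions", "action": "large_transfer",
     "time": datetime(2024, 1, 5, 10, 45)},
    {"account": "A2", "channel": "transactions", "action": "large_transfer",
     "time": datetime(2024, 1, 5, 11, 0)},
]

def flag_cross_channel(events, window=timedelta(hours=1), min_channels=3):
    """Flag accounts whose activity spans several channels in a short window."""
    by_account = defaultdict(list)
    for event in events:
        by_account[event["account"]].append(event)

    flagged = []
    for account, evs in by_account.items():
        evs.sort(key=lambda e: e["time"])
        # Slide over each event as a potential window start.
        for i, start in enumerate(evs):
            channels = {e["channel"] for e in evs[i:]
                        if e["time"] - start["time"] <= window}
            if len(channels) >= min_channels:
                flagged.append(account)
                break
    return flagged

print(flag_cross_channel(EVENTS))  # ['A1'] — A2's lone transfer never trips the rule
```

A single-channel monitor would score each of A1's events as normal; only the combined view surfaces the password reset, new-device login and large transfer as one sequence — the cross-functional analysis Postma describes.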

Fortunately, the ongoing digital transformation is “providing data that we would have never had available historically,” she added, “which helps accelerate the ability to identify and prevent fraud.”

More here: Using AI to Combat AI Fraud in the Credit Union Space

Keeping Cybercriminals Out While Letting Customers In 

That’s why financial institutions must leverage new technologies like AI and machine learning, along with techniques like orchestrated data analytics, to protect themselves against attacks.

“Some organizations are still debating how to leverage AI — but we’re beyond ‘how’ at this point. Firms need to leverage AI now; they need to have done it yesterday,” Postma said.

By collecting real-time data and using AI to parse it, financial institutions can quickly identify suspicious patterns of activity across channels. This proactive approach enables them to stay ahead of fraudsters who are constantly evolving their tactics.

“For financial institutions without the infrastructure in place to effectively defend themselves against AI attacks, partnering with an organization that does is their best and easiest bet to defend themselves. The conversation around using AI to defend against AI needs to be accelerated,” Postma said.

“You’re not in that boat by yourself. There are a lot of historical patterns, a lot of opportunity to leverage what the AI models have already been able to prevent and start to see that benefit right away,” she added.

These partnerships provide access to valuable consortium data and expertise that can significantly enhance fraud detection and prevention.

And as Postma noted, “bad actors are sharing their knowledge and tactics with each other,” too.

A collaborative approach can help create a unified front against fraudsters, making it more challenging for them to exploit vulnerabilities.

Consumer education is also paramount. Fraud awareness campaigns, especially on social media, can help educate members about common scams and how to recognize them. These campaigns should be ongoing and adapted to the evolving landscape of fraud.

While technology is invaluable in fraud prevention, there is no substitute for the human touch.

“I am a firm believer in letting technology do 98 to 99 percent of the workload. But while AI is very useful, it will never replace human action in its entirety,” said Postma.

As for what she sees the future holding?

“Issuers and merchants typically do not work well together, and I think we need to overcome some of the traditional walls that have come up in the issuer and merchant relationship,” Postma said. “There’s a huge amount of information we know about the consumer and there’s a huge amount of information that the merchant knows about the consumer as well, but they’re not the same … and the ability to collaborate and share that data to fight fraudsters together versus us both fighting the fraudsters independently gives the best chance of success.”