PYMNTS Intelligence: Countering Rising Fraud Threats to CUs

Generative AI programs like ChatGPT have made phishing and other fraud techniques not only more effective but also easier to conduct on a larger scale.

Fraud attacks against banks and credit unions (CUs) are nothing new, but consumers are taking them much more seriously than they used to. A study found that 74% of consumers rate fraud protection as a top-three priority when opening a new financial account, outstripping ease of use at 61% and good value for money at 46%.

CUs will need to implement strong measures to protect themselves and their members from fraud — or watch members take their business elsewhere. This month’s PYMNTS Intelligence explores emerging fraud threats within the CU landscape and the strategies institutions can deploy to safeguard their members, including artificial intelligence (AI).

AI can greatly augment existing fraud techniques like phishing.

New AI Tools Intensify and Multiply Existing Fraud Threats

Phishing attacks have long been a threat to financial institutions (FIs) and their customers. Scammers pose as FIs to trick customers into disclosing account numbers, passwords and other sensitive information, which the fraudsters can then exploit for their own gain or disseminate on the dark web. A study showed that the rate of phishing attacks has grown by 150% annually since 2019, with more than 1.3 million attacks recorded in Q4 2022 alone.

Generative AI programs like ChatGPT have made phishing not only more effective but also easier to conduct on a larger scale. These programs can effectively replicate a natural writing style with minimal input from fraudsters, allowing them to generate and send persuasive phishing emails to thousands of victims every hour.

AI can also be leveraged to create deepfake images to fool CUs’ identity verification protocols. Many FIs require members to upload their photo IDs when creating new accounts. However, fraudsters can easily counterfeit these documents using stolen photos from social media. Other criminals use deepfake tools to create authentic-sounding voices from nothing more than a 30-second voicemail recording, allowing them to gain access to member accounts over the phone.

Fighting AI With AI

Turnabout is fair play when it comes to AI and fraud, and many FIs have achieved success by deploying smart systems to fight back against AI-powered schemes. PYMNTS research found that companies relying on legacy digital identity verification solutions lose an above-average 4.5% of sales revenue to fraud, while firms leveraging solutions powered by AI and machine learning (ML) reduced that share to 2.3%.

The AI tools leveraged by FIs track contextual and behavioral cues to identify and help prevent attacks. For example, an AI system can recognize abnormal or excessive withdrawals from a CU account, potentially indicating that a fraudster is at work. Tools like these will be critical to protecting CUs and their members as bad actors accelerate and expand their attacks.
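The behavioral cue described above — a withdrawal far outside a member’s normal pattern — can be illustrated with a toy statistical check. This is a minimal sketch, not any vendor’s actual fraud model; the function name, the z-score approach and the threshold are all illustrative assumptions:

```python
from statistics import mean, stdev

def flag_abnormal_withdrawal(history, new_amount, z_threshold=3.0):
    """Flag a withdrawal whose amount deviates sharply from an account's history.

    history: list of past withdrawal amounts for one member account.
    Returns True when the new amount is a statistical outlier (illustrative
    z-score rule; production systems weigh many more behavioral signals).
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu  # flat history: any change stands out
    return abs(new_amount - mu) / sigma > z_threshold

# A member who normally withdraws $40-$60 suddenly withdraws $5,000.
history = [40, 55, 60, 45, 50, 42, 58]
print(flag_abnormal_withdrawal(history, 5000))  # True — outlier, worth review
print(flag_abnormal_withdrawal(history, 52))    # False — within normal range
```

Real deployments combine many such contextual signals (device, location, timing, transaction velocity) in trained ML models rather than a single rule, but the principle — score new activity against the member’s own baseline — is the same.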