
Attack Vectors 2024: Protecting Against What’s Next in Deepfake Fraud 


“Dad, please send them the money. They say they are going to hurt me.”

It is every parent’s worst nightmare: their child in trouble, being threatened.

But what if, unlike the money being transferred, the scenario being described isn’t real?

Bad actors around the world are leveraging generative artificial intelligence (AI) to create voice clones and even video deepfakes of wealthy individuals’ and business executives’ family members to extort them under false pretenses.

Several states have passed laws to regulate deepfakes, Microsoft President Brad Smith has called for measures to protect against the threat of AI manipulation, and the Federal Trade Commission (FTC) has warned about impersonation scams, among other alerts. But cybercriminals operate in the shadows regardless, and today's new AI tools have been like a steroid for them.

That’s because AI is reducing the effort required to manipulate targets. The ability to generate human-like text in an instant, clone a loved one’s voice from just snippets of audio, and scale behavioral-driven attacks with the click of a button has democratized access to cybercrimes that were previously the preserve of the most sophisticated bad actors.

Compounding matters, the proliferation of digital banking and real-time money movement has made it easier than ever for bad actors to request and receive the payments they want, without any of the friction that a Hollywood-style handoff of a briefcase full of cash in a neutral location might entail.

But while the insidious, zero-day nature of 21st-century, AI-powered fraud represents a pervasive and rapidly evolving threat, the “Attack Vectors 2024” series unpacks how the situation isn’t hopeless for firms, and even individuals, that are educated, prepared and proactive in their defenses.

Read more: How Year 1 of AI Impacted the Financial Fraud Landscape

Separating What’s Real From Fake in Today’s Landscape

“Now there’s been a democratization of fraud, where anyone can buy the tools and the tutorials they need to carry out successful attacks,” Michael Jabbara, vice president and global head of fraud services at Visa, told PYMNTS.

Highlighting the ease and scale with which bad actors can turn contemporary technologies to illicit ends, the hacking group Scattered Spider used deceptive phone calls to trick customer service representatives into revealing password credentials, breaching numerous organizations, including MGM Resorts International, Caesars Entertainment and Coinbase, in roughly 52 incidents since 2022.

“Utilizing generative AI, a fraudster can effectively mimic a voice within three seconds of having recorded data,” Karen Postma, managing vice president of risk analytics and fraud services at PSCU, told PYMNTS.

Behavioral-driven fraud tactics are becoming increasingly popular for breaching a merchant’s defenses, whether through business email compromise (BEC) attacks or account takeover (ATO) scams.

“There are many effective models that use AI and machine learning to learn what someone’s legitimate behavioral patterns are and replicate believable actions across the web,” Doriel Abrahams, head of risk in the U.S. at Forter, told PYMNTS. “Generative AI now gives scammers an easy and effective way of building confidence with their targets. … It can be very challenging, particularly for non-digitally native generations, to discern what’s real from what’s fake in today’s AI-driven landscape.”

Haywood Talcove, CEO of LexisNexis Risk Solutions’ government division, told PYMNTS in an interview posted in June that stolen information, such as photos, names, birthdays and home addresses, can be used to create fake video selfies capable of passing identity verification checks.

So, what are firms to do? Often, the best defense is a good offense. PYMNTS Intelligence finds that the rise of deepfakes has spurred two-thirds of FinTechs to boost their fraud prevention budgets.

See also: Unmasking Digital Imposters Is Rising Priority for Industrial Economy

Fighting Synthetic Fraud Requires Real, Human Defenses

“There’s a beautiful upside [to generative AI] that can reduce cost and drive much better customer experience,” Gerhard Oosthuizen, chief technology officer at Entersekt, told PYMNTS. “Unfortunately, there is also a darker side. People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs … now [fraud] is direct, and it is fear-based … Organizations have to deal now with more of the psychology of how to protect their customers than just providing a pure tech solution.”

While generative voice AI is becoming worryingly good at cloning and generating voices outright, a highly effective way of duping scam victims and potentially compromising biometric gateways, there is still one avenue it can’t penetrate: the actual person it is pretending to imitate.

Return to the scenario at the start of this piece, the phone call in which a child purportedly pleads with a parent to do as the fraudster says. By calling the child directly, or someone close to them, the target can puncture the illusion the bad actor is attempting to create.

Fraudsters seek to create emotionally charged scenarios in which decision-making is rushed and erratic, but a crucial first step for end users and businesses facing a new attack vector is simply taking a beat and validating the context of the threat.
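As a minimal sketch of what that “take a beat” step could look like when encoded in a payments workflow (the function names, fields and threshold below are hypothetical illustrations, not any vendor’s actual API), a flagged transfer can be held until the purported requester is confirmed through a channel already on file:

```python
from dataclasses import dataclass

# Hypothetical sketch: hold urgent, out-of-pattern transfer requests until the
# requester is confirmed through a separately established channel (e.g., a
# callback to a number already on file, never one supplied in the request itself).

@dataclass
class TransferRequest:
    amount: float
    payee_is_new: bool       # first time this recipient has been paid
    urgency_flagged: bool    # framed as an emergency ("pay now or else")
    channel: str             # "phone", "email", "chat", ...

def requires_out_of_band_check(req: TransferRequest, review_threshold: float = 1_000.0) -> bool:
    """Return True when the request should be held for independent verification."""
    risky_context = req.urgency_flagged or req.payee_is_new
    return risky_context and req.amount >= review_threshold

def handle(req: TransferRequest) -> str:
    if requires_out_of_band_check(req):
        # The hold is the "take a beat": contact the purported requester via a
        # known-good channel before any money moves.
        return "HOLD: confirm with requester via a pre-registered contact method"
    return "PROCEED"

print(handle(TransferRequest(amount=25_000, payee_is_new=True, urgency_flagged=True, channel="phone")))
```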

After all, the weakest link is frequently the human link, which means training employees on scam identification and prevention is critical to building an effective front-line defense.

To defend every potential attack vector against sophisticated attacks, proper digital hygiene is crucial for both businesses and individuals, as is deploying orchestrated, multilayered solutions to authenticate and validate any requests.

This proactive approach not only safeguards institutions and their customers from financial losses but also ensures a more robust defense against evolving fraudulent tactics.
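As a rough illustration of what orchestrated, multilayered validation might look like in practice (the signals, weights and threshold here are assumptions for demonstration, not a specific provider’s scoring model), each request can be evaluated by several independent checks, none of which is trusted on its own:

```python
# Illustrative sketch of layering independent validation signals on a request.
# The checks, weights, and approval threshold are assumed for demonstration only.

def device_is_recognized(request: dict) -> bool:
    return request.get("device_id") in request.get("known_devices", [])

def behavior_matches_history(request: dict) -> bool:
    # Placeholder for behavioral analytics (typing cadence, navigation patterns, etc.).
    return request.get("behavior_score", 0.0) >= 0.7

def out_of_band_confirmed(request: dict) -> bool:
    return request.get("otp_verified", False)

LAYERS = [
    (device_is_recognized, 0.3),
    (behavior_matches_history, 0.3),
    (out_of_band_confirmed, 0.4),
]

def validate(request: dict, approve_at: float = 0.7) -> str:
    """Combine layered signals; no single check is sufficient by itself."""
    score = sum(weight for check, weight in LAYERS if check(request))
    return "approve" if score >= approve_at else "step-up or decline"

example = {
    "device_id": "abc123",
    "known_devices": ["abc123"],
    "behavior_score": 0.55,   # behavior looks slightly off
    "otp_verified": True,
}
print(validate(example))  # device + OTP pass: score 0.7 -> approve
```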

PYMNTS Intelligence found that companies relying on legacy and manual verification solutions lose above-average shares of annual sales to fraud, at 4.5%. However, firms using proactive and automated solutions, such as those powered by AI and machine learning (ML), typically reduce their share of lost sales to 2.3%.
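To put those figures in concrete terms, here is the arithmetic applied to a hypothetical company (the $500 million annual revenue is an assumed illustration, not a figure from the research):

```python
# Worked example using the fraud-loss rates cited above.
# The annual revenue figure is a hypothetical assumption for illustration.
annual_revenue = 500_000_000          # assumed annual sales, USD

legacy_loss_rate = 0.045              # 4.5% of sales lost with legacy/manual verification
automated_loss_rate = 0.023           # 2.3% with proactive AI/ML-driven solutions

legacy_loss = annual_revenue * legacy_loss_rate        # $22.5M
automated_loss = annual_revenue * automated_loss_rate  # $11.5M

print(f"Legacy tooling:    ${legacy_loss:,.0f} lost to fraud")
print(f"Automated tooling: ${automated_loss:,.0f} lost to fraud")
print(f"Difference:        ${legacy_loss - automated_loss:,.0f} per year")
```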

“In the payments security space, just as in the consumer space, you’re seeing a massive investment in AI. It’s a buzzword — but it has to be, as bad actors evolve in the ways they are attacking businesses,” Jeff Hallenbeck, head of financial partnerships at Forter, told PYMNTS.

“It’s all in the spirit of getting the fraudsters out of the system,” Mike Lemberger, senior vice president, regional risk officer for North America at Visa, told PYMNTS. “But it takes an ecosystem, it takes time, and it takes investment to get there. The innovations already exist, and a lot comes down to implementation and priorities.”