Why Whack-a-Mole Risk Prevention Won’t Work in Today’s Data Economy


When it comes to digital risk, the best defense is a good offense.

That’s because while firms may be feeling whiplash from the hyper-rapid advance of innovative technologies like generative artificial intelligence (AI), bad actors are feeling a wave of euphoria at all the new tools they’ve been able to add to their arsenals.

After all, with every phase shift of technological innovation, new opportunities emerge for fraudsters to find fresh attack vectors and target newly exposed vulnerabilities.

This, as it was recently revealed that FinTech firm Revolut suffered a series of fraudulent attacks in which criminals stole more than $20 million over several months leading up to and during 2022, an amount equal to almost two-thirds of the platform’s annual profit the year prior.

In a landscape where fraud is everywhere, it is not enough for firms to reactively play whack-a-mole. Enterprises must increasingly take a multilayered approach, stymieing cybercriminals before they ever reach the gates.

Compounding the risk to firms, particularly larger, more attractive targets, is the fact that understaffed and poorly trained teams, outdated legacy processes, and an ever-wider distribution of data and computing resources are combining to compromise enterprise security, creating new layers of operational complexity to defend.

While they aren’t glamorous, a sound technical infrastructure, strong processes carried out by well-trained employees, reprioritized compliance and control programs, and automated, end-to-end anti-fraud solutions have never been more important for businesses.

Read more: Cybersecurity Is Driving Banks to the Cloud

Insufficient Staff and Outdated Processes Expose Firms to Risk 

The threat of cybercrime is growing as bad actors leverage cutting-edge tools to attack businesses and individuals.

Fake invoice schemes, account takeover (ATO) scams and business email compromise (BEC) attacks are particularly dangerous and can result in data breaches, financial theft, fraud and extortion.
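To make the BEC threat concrete, consider one common defensive heuristic: flagging sender domains that closely resemble, but don’t exactly match, a trusted counterparty’s domain. The sketch below is illustrative only; the domain names and similarity threshold are assumptions for demonstration, not details drawn from any specific vendor’s product.

```python
from difflib import SequenceMatcher

# Domains we legitimately do business with (illustrative examples).
TRUSTED_DOMAINS = {"example.com", "examplebank.com"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between a sender domain and a trusted domain (0.0-1.0)."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def flag_suspicious_sender(sender_email: str, threshold: float = 0.85) -> bool:
    """Flag senders whose domain is near, but not equal to, a trusted domain.

    A close-but-not-exact match (e.g., 'examp1e.com' vs. 'example.com') is a
    classic business email compromise (BEC) tell.
    """
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: treat as legitimate at this layer
    return any(lookalike_score(domain, t) >= threshold for t in TRUSTED_DOMAINS)

if __name__ == "__main__":
    print(flag_suspicious_sender("ap@examp1e.com"))    # True: '1' swapped for 'l'
    print(flag_suspicious_sender("ap@example.com"))    # False: exact trusted match
    print(flag_suspicious_sender("ap@unrelated.org"))  # False: not a lookalike
```

A check like this would be only one layer of a real defense, sitting alongside authentication standards, payment verification and employee training.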

Technology such as generative AI has created new opportunities for scammers, making it far easier and more efficient for them to commit fraud at scale.

Fraud-related costs now amount to 2% to 5% of organizations’ annual revenues, as reported in the “FinTech Risk Management Playbook: Combating B2B Payments Fraud,” a PYMNTS and nsKnox collaboration. For a firm with $1 billion in annual revenue, that translates to $20 million to $50 million lost each year.

That’s why organizations should defend their exposure points with sophisticated strategies that employ a potent mix of future-fit technology, data and analytics, and educational best practices for their employees.

Fortunately, new technology cuts both ways. Enterprises need to take advantage of new tools like AI before bad actors take advantage of them.

“When you think about financial services, the [immediate use case of AI] is obviously fraud protection,” Jeremiah Lotz, managing vice president, digital and data at PSCU, told PYMNTS.

The use of technology to defend against attacks and build strong risk programs matters most for firms getting by with relatively lean compliance and cybersecurity departments.

See also: It’s Not Enough That Businesses Win — Fraudsters Must Also Lose

Today’s Fraudsters Are More Sophisticated Than Ever

Fraud detection and protection are areas of financial services where AI-powered solutions have been working in the background for several years — but enterprises shouldn’t take these legacy processes for granted.

Just as the capabilities of modern AI solutions far outstrip those of earlier predictive algorithms, the potential for the technology’s misuse and abuse toward nefarious ends has grown in step.

PYMNTS research found that organizations relying exclusively on legacy processes and tools may find themselves vulnerable to modern fraud attacks.

“There’s a beautiful upside [to generative AI] that can reduce cost and drive much better customer experience,” Gerhard Oosthuizen, chief technology officer at Entersekt, told PYMNTS in February. “Unfortunately, there is also a darker side. People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs.”

Doriel Abrahams, head of risk at Forter, agreed. Abrahams told PYMNTS in May that while organizations can leverage AI and machine learning (ML) tools to train anti-fraud models and establish robust controls, “fraudsters can do the same … as a general rule of thumb, [bad actors] tend to be very sophisticated.”
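As a rough illustration of the kind of anti-fraud modeling Abrahams describes, the sketch below trains an unsupervised anomaly detector on synthetic transaction features using scikit-learn’s IsolationForest. The feature set, data and parameters are illustrative assumptions, not the models Forter or any other firm actually runs; production systems draw on far richer signals.

```python
# A minimal sketch of an ML anti-fraud model: unsupervised anomaly
# detection over transaction features. Data here is synthetic and the
# features are assumptions chosen for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic historical transactions: [amount_usd, hour_of_day, payee_age_days]
normal = np.column_stack([
    rng.normal(120, 40, 5000),   # typical invoice amounts
    rng.normal(13, 3, 5000),     # business-hours activity
    rng.normal(400, 120, 5000),  # long-established payees
])

# Fit on historical data assumed to be mostly legitimate.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new activity: a routine payment vs. a large off-hours payment
# to a payee added yesterday (a classic fake-invoice pattern).
candidates = np.array([
    [110.0, 14.0, 380.0],  # looks routine
    [9500.0, 3.0, 1.0],    # large, 3 a.m., brand-new payee
])
print(model.predict(candidates))  # 1 = looks normal, -1 = flag for review
```

The design point Abrahams makes holds either way: the same tooling is available to attackers, so models like this have to be retrained and re-validated continuously rather than deployed once and forgotten.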

Ninety-five percent of executives surveyed by PYMNTS said they consider using innovative solutions to improve fraud detection and compliance a high priority.

While that number may seem high, it’s important to remember that 100% of cybercriminals are already using innovative solutions to probe enterprise defenses.