From Gift Cards to Email Compromise, Behavioral Scams Present Ongoing Threat


The rise of generative artificial intelligence (AI) has reshaped the cyber threat landscape.

Firms increasingly rely on AI, and that reliance is set to deepen as innovations continue to transform the information ecosystem, presenting new opportunities while also exposing enterprises to more sophisticated threats than ever.

Still, it remains important to remember that, despite all the modern bells and whistles today’s bad actors and crime groups have at their disposal, popular cyberattacks like transaction fraud, fraudulent account openings, account takeovers and other behaviorally driven scams ultimately rely on the successful manipulation of a human being to work.

Two cases announced Tuesday (Sept. 19) by the United States Attorney for the Southern District of New York (SDNY) emphasize that coordinated behavioral scams aren’t going anywhere. In fact, they are only accelerating as technology hands scammers new tactics.

In one case, the SDNY charged an individual in a business email compromise scheme that targeted a Manhattan hedge fund and a Missouri hospital system by impersonating their senior officers and executives to direct payments for fraudulent invoices.

In the second case, the SDNY charged two individuals with engaging in a “brazen scheme” to obtain gift card information worth millions of dollars from victims through lies and impersonation.

That’s why, to best protect their perimeter in today’s shifting environment, it is increasingly critical for firms to batten down the hatches by controlling what’s controllable, which frequently starts with educating their own employees about the persistent social engineering threats inherent to the digital landscape.

Read also: Unmasking Digital Imposters Is Rising Priority for Industrial Economy

Increasing Fraud Heightens Need for Newer, Better Technologies 

According to PYMNTS Intelligence, based on a survey of 200 executives at the largest banks in the U.S., 43% of financial institutions (FIs) saw fraud increase compared to 2022, with the average cost of fraud rising 65% for FIs holding $5 billion or more in assets.

One reason for the recent uptick in fraud is that new generative AI capabilities give bad actors more avenues to exploit long-standing vulnerabilities, granting them access to sophisticated approaches, such as synthetic digital identities and AI voice clones, that were once the province of professional crime syndicates.

Until recently, fraudsters needed a certain amount of technical expertise to craft malicious code.

“You needed to build your own toolkit, and cybercrime had been the domain of well-organized, well-funded gangs,” Michael Jabbara, vice president and global head of fraud services at Visa, told PYMNTS. “But now there’s been a democratization of fraud, where anyone can buy the tools and the tutorials they need to carry out successful attacks.” 

Echoing that sentiment, Tobias Schweiger, CEO and co-founder of Hawk AI, told PYMNTS that “the application of technology isn’t just reserved for the good guys … and bad actors are accelerating what I would call an arms race, using all of those technologies.” 

“As a financial institution, one has to be aware of that accelerated trend and make sure your organization has enough technology on the good side of the equation to fight back … Upgrading solutions and systems to also include machine learning (ML) and AI is really the only way [forward],” he added.

Against a broader backdrop where bad actors are getting better at bypassing legacy lines of fraud defense, the onus is now on organizations to elevate their own systems and processes to counter cybercriminals’ investment in fraud capabilities.

And firms’ investments shouldn’t end with just a modernized tech stack. Frontline education of employees is also crucial.

See more: Fraud Losses From Impersonator Scams Double for Largest US Banks

Effective Digital Hygiene

Criminals leveraging future-fit attack strategies like generative AI-driven behavioral scams present a problem nearly every industry faces.

Bad actors often impersonate employees they have studied on social media and other digital platforms, using the details they gather to talk company help desks into issuing fresh passwords over the phone.

And as bad actors get smarter, businesses need to match their speed while becoming more precise at identifying problem behaviors, and vulnerable areas, in real time.

In the past, anti-fraud decision-making relied on strategies such as keyword analysis, mention counts and sentiment analysis, but these legacy methods lack the nuance required for a holistic defense program in today’s environment.
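To make that gap concrete, here is a minimal, purely illustrative Python sketch of the legacy keyword-and-mention-count approach; the keyword list, weights and example messages are hypothetical assumptions, not drawn from any vendor’s actual rules.

```python
# Illustrative sketch of legacy rule-based fraud scoring: keyword hits plus
# a crude urgency proxy standing in for sentiment analysis. All keywords
# and weights here are hypothetical, chosen only for demonstration.

SUSPICIOUS_KEYWORDS = {"wire transfer", "gift card", "urgent", "invoice"}

def legacy_fraud_score(message: str) -> float:
    """Score a message with simple keyword and mention-count rules."""
    text = message.lower()
    keyword_hits = sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in text)
    urgency_mentions = text.count("immediately") + text.count("asap")
    return keyword_hits + 0.5 * urgency_mentions

if __name__ == "__main__":
    # An obvious BEC-style lure trips multiple rules and scores high.
    print(legacy_fraud_score(
        "URGENT: wire transfer needed immediately for this invoice."))
    # A paraphrased scam avoids every keyword and scores zero, which is
    # exactly the nuance gap legacy methods leave open.
    print(legacy_fraud_score("Kindly settle the attached bill today."))
```

The second message shows the failure mode: a scammer who simply rewords the request slips past every rule, which is why static keyword lists cannot anchor a holistic defense.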

Fortunately, innovative technologies can help firms be proactive in their defense.

“The [immediate use case of AI] is obviously fraud protection,” Jeremiah Lotz, managing vice president, digital and data at PSCU, told PYMNTS.

And while tapping AI-powered tools to boost fraud defenses isn’t necessarily a new approach, Lotz explained that today’s generative AI solutions can “take things to the next level by looking at deeper, more personalized experiences” to support identity verification and transaction authorization, as well as flag suspicious activity by better analyzing behavioral patterns.
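As a hedged illustration of what analyzing behavioral patterns can mean in practice, the Python sketch below flags a transaction that deviates sharply from a user’s own spending history. The z-score method, threshold and sample data are assumptions for demonstration, not a description of PSCU’s actual system.

```python
# Illustrative behavioral-pattern check: flag an amount that sits far
# outside a user's established spending baseline. The threshold and data
# are hypothetical, for demonstration only.

from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Return True if the amount deviates strongly from past behavior."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# Example: a user who usually spends $20-$60 suddenly moves $5,000.
usual = [25.0, 40.0, 32.0, 55.0, 21.0, 47.0]
print(is_anomalous(usual, 5000.0))  # True: worth step-up verification
print(is_anomalous(usual, 38.0))    # False: fits the established pattern
```

A per-user baseline like this catches the reworded scams that keyword rules miss, since it keys on what the account does rather than what the message says; production systems layer far richer signals on the same idea.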

After all, fraudsters themselves are pouring innovation and investment into the fraud space, making an agile defense program table stakes for modern businesses.

And interestingly, rather than outsourcing fraud detection and protection, PYMNTS Intelligence finds that many firms are moving to develop solutions in-house.