How Merchants Fight Fire With Fire When Facing AI-Equipped Fraudsters

Digital fraud has been a steadily growing problem over the past several years, but the pandemic has made it impossible to ignore.

Bad actors have been preying on businesses’ and individuals’ economic- and health-related anxieties to stage account takeovers, credential stuffing attacks, impersonation schemes and a plethora of other scams. Half of all companies in one study said they had been targeted by fraud over the past two years, with firms experiencing six attempts on average. These businesses collectively lost $42 billion to fraud, and 13 percent of those affected reported losing more than $50 million each.

Many of the schemes targeting businesses are nothing new, but fraudsters’ attempts are growing ever more sophisticated thanks to advanced technologies like artificial intelligence (AI) and machine learning (ML), which allow them to stage more complex attacks and to deploy myriad scams simultaneously. The only effective way for companies to fight back against this technological onslaught is to harness advanced AI and ML tools of their own.

The following Deep Dive explores the AI-powered fraud schemes wreaking havoc on merchants and other businesses and how organizations can leverage adversarial training with their own AI systems to keep themselves and their customers safe.

How Fraudsters Leverage AI And ML

Fraudsters typically use smart technologies either to make individual attacks more effective or to rapidly perpetrate multiple attacks. Social engineering attacks fit into the former category: they entail deceiving victims into taking harmful actions, such as divulging confidential information, making bank transfers or opening malware.

Many fraudsters leverage ML to scour online databases and social media websites for potential victims, but others use it more creatively. One fraud ring’s particularly advanced social engineering attack used an AI-based algorithm to replicate the voice of a company’s CEO and request a bank transfer of 220,000 euros ($262,000). The attack was so convincing that the victim believed the CEO’s voice was genuine, never confirmed who was on the line and completed at least one fraudulent wire transfer before growing suspicious and notifying authorities.

Other fraudsters stick to the same scams on which they have always relied but are deploying AI and ML to boost the scale of their attacks. Phishing is a common fraud technique that has become even more widespread during the pandemic, with fraudsters impersonating tax or public health officials to trick potential victims into divulging private information.

Most would-be victims are savvy enough to recognize obvious scams and delete them, meaning fraudsters must often send millions of emails en masse to turn a profit on this scheme. ML can help them significantly in this endeavor, both by automating the distribution of ever-larger email volumes and by refining the messages those emails contain.

ML algorithms can scour online platforms to learn what legitimate messages look like and analyze the writing styles they use, then leverage that data to mass-produce emails that convincingly imitate those styles. These programs can also examine which phishing emails are caught in email platforms’ spam filters and tweak future messages to make them more likely to reach recipients’ inboxes.

AI and ML tools are helping fraudsters get smarter and faster, putting ever more consumers’ money and personal data at risk. Merchants and other businesses will therefore need to fight fire with fire by deploying AI tools of their own to match them.

How Adversarial Training Improves AI- And ML-Based Fraud Prevention

Fraudsters deploy AI and ML technologies to learn their targets’ defenses and determine how to thwart or overwhelm them, but many AI-powered fraud prevention systems turn this same process against the fraudsters to keep businesses and customers safe. One of the most effective ways of deploying AI to fight AI-powered fraud is adversarial learning, in which an ML model is trained against examples of real-life fraud. Such training is typically conducted on ML models that are already largely mature and nearly ready for deployment.
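
To make the idea concrete, the following is a minimal sketch of adversarial training written in PyTorch, using the fast gradient sign method (FGSM), a standard technique not named in this article. The toy transaction features, network architecture, labels and perturbation size are all illustrative assumptions, not any vendor’s actual system.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy data: 1,000 "transactions" with eight numeric features
    # (amount, velocity and so on); label 1 = fraud, 0 = legitimate.
    X = torch.randn(1000, 8)
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).float().unsqueeze(1)

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    EPSILON = 0.1  # size of the adversarial perturbation (an assumption)

    for epoch in range(20):
        # Step 1: craft adversarial examples with FGSM -- nudge each
        # input in the direction that most increases the loss, mimicking
        # a fraudster probing the model for blind spots.
        X_adv = X.clone().requires_grad_(True)
        loss_fn(model(X_adv), y).backward()
        X_adv = (X_adv + EPSILON * X_adv.grad.sign()).detach()

        # Step 2: retrain on clean and adversarial examples together so
        # the model learns to catch the perturbed fraud, too.
        optimizer.zero_grad()
        combined = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
        combined.backward()
        optimizer.step()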

Adversarial training is a slow and expensive process, however. ML prevention programs in training must probe each example of AI-enabled fraud individually for weaknesses, and the system must be retrained on each new example found. Researchers are currently working to streamline and optimize this process, including by developing parallel neural networks that share threat intelligence between disparate ML systems. Adversarial training is thus typically the domain of large businesses with dedicated in-house security teams or third-party threat researchers.
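
A hedged sketch of why this per-example retraining is costly, continuing the example above (model, optimizer and loss_fn are reused; the buffer and function name are hypothetical, not a real product’s API):

    # Each newly confirmed fraud pattern triggers another fine-tuning
    # pass over a growing buffer -- one reason adversarial training
    # stays slow and expensive.
    new_fraud_examples = []

    def on_new_fraud_example(x_new, steps=10):
        """Fine-tune the deployed model on one newly discovered fraud case."""
        new_fraud_examples.append(x_new)
        batch = torch.stack(new_fraud_examples)
        labels = torch.ones(len(batch), 1)  # every buffered example is confirmed fraud
        for _ in range(steps):
            optimizer.zero_grad()
            loss_fn(model(batch), labels).backward()
            optimizer.step()

    # Example: an analyst confirms a new AI-generated fraud pattern.
    on_new_fraud_example(torch.randn(8))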

Almost all technologies grow cheaper and more accessible over time, and adversarial training is likely to follow the same trajectory. That is good news, as firms can use all the fraud-fighting help they can get to beat fraudsters at their own AI-enhanced game.