Digital Application Fraud In The Spotlight

Pandemic-related chaos is an ideal environment for cybercrime, as merchants and financial institutions (FIs) have been discovering since the COVID-19 outbreak went global. With workflows disrupted and personnel dispersed, FIs and merchants are detecting a rise in application fraud. When fraudsters open fake accounts and successfully drain them, the result is pure loss.

This is where machine learning (ML) and artificial intelligence (AI) are really proving their worth, as such best-in-class solutions “…are adept at finding fraudulent activities that many human bankers would write off as risky loans, especially when bad actors are using fabricated identities, which leave no identity theft victims to bring attempted fraud to FIs’ attentions,” according to the March 2020 Digital Fraud Tracker®, a DataVisor collaboration.

The latest Digital Fraud Tracker® also delves into other topics of intense interest to the sector, such as the estimated four million cybersecurity jobs that a thriving industry needs to fill immediately, and offers valuable prescriptive insights for combating application fraud.

Chaos vs. Control

The application fraud surge finds FIs and merchants retooling their authentication measures, variously arraying AI, ML and unsupervised machine learning (UML) against the threat. The alarming sophistication of these illegal operators has necessitated the most advanced tools in the fight against this type of “sleeper fraud.”

Instant money movement is becoming the new normal in banking and financial services, and the COVID-19 pandemic has added disorder to an already confusing landscape. This means sleeper fraudsters, especially those using third-party data, are bound to have a field day before the global contagion abates and commerce stabilizes.

“Fraudsters employing third-party fraud apply for loans with stolen identities,” the report states. “Such schemes are much harder to detect than first-party scams because serial appliers will use fresh identities each time. This type of fraud is typically only noticed when victims contact their FIs after having noticed unusual activity in their credit histories. Competent fraudsters will be long gone at this point, forcing financial establishments to eat their losses.”

Making matters worse are synthetic identities, also tough to spot because there’s no actual person behind them to report that their identity or credentials have been stolen. But FIs, merchants and their solutions partners have powerful new tools at their disposal.

“AI and ML are especially adept at identifying synthetic fraud because traditional warning signs, like credit risk, can be unreliable for such schemes,” the report states. “Bad actors develop these identities over years and develop FICO scores and financial histories, but they often have minuscule inconsistencies, including errors that would be unnoticeable to human analysts but are dead giveaways to AI-powered platforms. ML can even detect fraudsters without being told the data points for which it should look, identifying potential fraudsters at the point of account approval and eschewing the need for extensive training periods.”
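To make that unsupervised approach concrete, the sketch below shows one common way to flag anomalous applications without any labeled fraud examples, using Python and scikit-learn’s IsolationForest. The feature names, synthetic data and contamination rate are illustrative assumptions for this article, not the Tracker’s, DataVisor’s or any FI’s actual model.

# Minimal, hypothetical sketch: unsupervised anomaly detection on credit applications.
# All feature names and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per application:
# [account_age_days, credit_score, applications_last_30_days, requested_limit_to_income]
rng = np.random.default_rng(42)
typical = rng.normal(loc=[2000, 690, 1, 0.3], scale=[800, 40, 1, 0.1], size=(500, 4))
odd = rng.normal(loc=[30, 720, 9, 1.5], scale=[10, 15, 2, 0.3], size=(5, 4))
applications = np.vstack([typical, odd])

# No fraud labels are supplied; the forest isolates applications whose
# combinations of features look unlike the bulk of the data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(applications)

scores = model.decision_function(applications)  # lower score = more anomalous
flags = model.predict(applications)             # -1 = flag for manual review at approval time

print(f"Flagged {int((flags == -1).sum())} of {len(applications)} applications for review")

The design choice that matters here is that the model never sees a “fraud” label: it learns what the bulk of applications look like and surfaces the ones whose feature combinations do not fit, which mirrors the report’s point that ML can spot likely fraudsters without being told which data points to examine.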

Cybercrime Police Blotter

The March 2020 Digital Fraud Tracker® also details other important developments, from the pending release of Verizon Machine State Integrity, a blockchain-based security system, to the arrest and prosecution of former Microsoft engineer Volodymyr Kvashuk.

Microsoft and the U.S. Department of Justice caught up with Kvashuk after his bold cybercrime spree netted an estimated $10 million over two years, a haul that could land him up to 20 years in prison, according to reporting in the latest Tracker.