Fighting Fraud With AI: Part Man, Part Machine, All Cop

In financial services, technological advances open new doors and new avenues for fraud.

Financial crime is a Hydra-headed monster, constantly finding new ways to breach defenses. From sophisticated phishing attacks to the manipulation of digital transactions, fraudsters are on the lookout for the next loophole to exploit.

The staggering sums lost globally to fraud and scams, which exceed losses from all other forms of financial crime, underscore the severity of the issue.

“It’s three times the rest of financial crime combined,” Robin Lee, general manager of APAC at Hawk AI, told PYMNTS.

Increasingly prevalent, Lee added, are “scam centers” where hundreds of individuals are coerced into running scams against neighboring countries.

These fraudulent operations often rely on brutal tactics, such as human trafficking, violence and coercion, blurring the lines between different illicit activities and underscoring a grim trend: organized financial crime operating at an industrial scale.

“Asia Pacific is probably the most fragmented part of the world,” Lee said. “Someone from Australia is so different to someone from Indonesia who in turn is so different to someone from Japan, meaning that these scams come in all different types of flavors.”

Region-specific regulations are themselves fragmented, too, he added.

However, modern solutions that use innovations like artificial intelligence to process and understand vast quantities of data at unprecedented speed offer a beacon of hope in an environment where transactions occur in milliseconds and fraud patterns evolve rapidly.

Technological Solutions and the Role of AI in Fighting Financial Fraud

By harnessing the power of AI, which acts in the moment and learns from its actions, financial institutions can now detect anomalies in real time, reducing the window of opportunity for fraudsters to act.
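In rough terms, real-time detection of this kind means scoring each transaction against an account's history the moment it arrives. The sketch below is purely illustrative — a simple streaming z-score check using Welford's online algorithm, not a description of Hawk AI's or any vendor's actual pipeline, and the class and threshold names are invented for the example.

```python
# Illustrative sketch only: per-account streaming anomaly check.
# Real systems combine many signals; this shows just one statistical idea.
import math
from collections import defaultdict

class StreamingAnomalyScorer:
    """Tracks a running mean/variance of transaction amounts per account
    (Welford's online algorithm) and flags amounts far outside the norm."""

    def __init__(self, z_threshold=4.0, min_history=10):
        self.z_threshold = z_threshold  # how many standard deviations counts as anomalous
        self.min_history = min_history  # don't flag until we have enough history
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # account -> [n, mean, M2]

    def score(self, account_id, amount):
        n, mean, m2 = self.stats[account_id]
        # Score against history seen so far, before folding in the new point.
        flagged = False
        if n >= self.min_history and m2 > 0:
            std = math.sqrt(m2 / (n - 1))
            flagged = abs(amount - mean) / std > self.z_threshold
        # Welford update of the running statistics.
        n += 1
        delta = amount - mean
        mean += delta / n
        m2 += delta * (amount - mean)
        self.stats[account_id] = [n, mean, m2]
        return flagged

scorer = StreamingAnomalyScorer()
for amt in [50, 55, 48, 52, 60, 49, 51, 53, 47, 54]:
    scorer.score("acct-1", amt)          # establish a baseline of small payments
print(scorer.score("acct-1", 5000))      # a sudden 5,000 payment → True
```

The point of the hedge thresholds (`z_threshold`, `min_history`) is the trade-off the article implies: flag fast enough to shrink the fraudster's window, without drowning compliance officers in false positives on new accounts.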

The challenge, however, lies not in the technology itself but in the fragmented landscape of financial crime prevention. Regulatory environments differ dramatically across borders, and financial institutions often operate on a patchwork of systems and standards. This fragmentation poses a barrier to the effective implementation of advanced technologies.

The disparate nature of global financial systems means that while modern tools can enhance fraud detection within one institution or jurisdiction, achieving a cohesive, global defense network is far more complex. Given that regulatory fragmentation, tailoring AI solutions to the specific needs of each region remains an equally important goal.

Lee advocated for solutions that offer transparency and accountability.

Lee highlighted the role of compliance officers in integrating technology with critical thinking, describing an approach akin to “Robocop, not Terminator.” This hybrid model combines human intelligence with technological tools to effectively combat financial crime.

“When the first Robocop movie came out, the tagline was: part man, part machine, and all cop. That does a good job in summarizing the approach that we need to take, versus Terminator, which is 100% machine,” said Lee.

Despite advances in technology, fighting fraud remains a contest of one human brain against another, he said, making experience and critical thinking invaluable — and irreplaceable — tools.

The Evolution of Best Practices Within a Fragmented Regulatory Landscape

Reflecting on the evolution of compliance roles, Lee underscored the shift toward technologically proficient compliance officers. He emphasized the need for continuous adaptation to stay ahead of evolving fraud typologies and regulatory requirements.

Sandboxing, a practice that allows for testing and refining anti-fraud measures in a controlled environment, has emerged as a valuable tool in this context, Lee said.

Looking ahead, Lee highlighted the escalating threat posed by sophisticated fraud tactics, including social engineering and deepfakes. As criminals continue to innovate, he stressed the need for robust verification and screening mechanisms to counter emerging threats, as well as an increase in anti-fraud consortiums, public-private partnerships, and data-sharing initiatives.

“Criminals will continue to innovate, and keeping up with that is not an easy task … that’s why the need for more robust measures is an order, not a wish,” said Lee.
