Cybersecurity is a constant and pressing concern for financial institutions of all types and sizes, ranging from small community credit unions to multibillion-dollar international banking conglomerates. The World Economic Forum estimates that more than $1 trillion is lost to financial crimes annually, with fraudsters deploying techniques ranging from phishing scams to identity theft to sophisticated botnets. Cybercrime and fraud have only worsened in recent years: one study found that fraud increased by 30 percent in Q3 2019 alone and that one in every five account openings was fraudulent.
Fighting this fraud will require not only diligence on the part of bank staff and customers but also an array of advanced technology. Some of the most effective innovations are authentication tools ensuring that customers are legitimate and artificial intelligence (AI)-based systems that sniff out fraudsters who manage to make it into bank systems. The following Deep Dive explores how these two technologies can drastically reduce bank fraud rates without unduly burdening legitimate customers.
The first step to securing customer accounts at digital-first banks involves customer authentication to ensure users are who they say they are and not fraudsters attempting to breach accounts with stolen identities. Banks largely use passwords, PINs and other forms of knowledge-based identification, with a study by PYMNTS finding that passwords are the most common authentication method used by financial services, eCommerce and healthcare companies. This method is not the most secure, however, in large part due to poor password hygiene on the part of bank customers.
A recent study from data analytics firm FICO found only 37 percent of bank customers in Canada use separate passwords for different accounts, for example, and 22 percent use two to five passwords among all their online profiles. This represents a massive security risk, as a data breach at any one of these accounts could give fraudsters access to any other account that uses the same password.
Banks are instead turning to more secure forms of authentication for their customers’ accounts. One of the most common is multifactor authentication (MFA), which requires additional input from users beyond their passwords, such as numeric codes texted to their mobile devices. These authentication methods can stop potential bad actors cold, as the passwords they steal from data breaches are useless on their own. Studies have shown that using MFA can prevent more than 99.9 percent of attacks that rely on stolen credentials, making such solutions an imposing obstacle for hackers armed with pilfered passwords.
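The numeric codes behind this kind of second factor are typically one-time passwords. The following is a minimal sketch of time-based OTP (TOTP, per RFC 6238) generation and verification in Python; the function names and the per-user shared secret are illustrative, and real deployments would provision secrets securely at enrollment:

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    # Derive the moving factor from the current 30-second window (RFC 6238).
    counter = int(timestamp) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(submitted: str, secret: bytes, timestamp: float, window: int = 1) -> bool:
    # Accept codes from adjacent time windows to tolerate clock drift,
    # comparing in constant time to avoid timing side channels.
    return any(
        hmac.compare_digest(submitted, totp(secret, timestamp + drift * 30))
        for drift in range(-window, window + 1)
    )
```

Because the code is derived from a secret the attacker never sees and expires within a minute or so, a password stolen in a breach is useless on its own, which is the property the statistics above reflect.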
One of the problems with MFA, however, is that it requires extra work on the customer’s part — a tall order when many customers value seamlessness over security. Some banks are turning to biometric authentication methods instead, like a selfie taken on a customer’s smartphone. The bank’s system compares the submitted selfie to a 3D facial-recognition map that the user first uploaded when they made their account to confirm their identity.
Fraudsters have been known to spoof facial-recognition systems by using photos or videos of legitimate users, however, prompting some systems to add a liveness check confirming that a real person, not a recording, is present. The system could request that the user smile or wink during the verification process, for example.
No authentication system is 100 percent effective, but systems that harness AI have shown the most promise in this capacity.
Leveraging AI To Hunt Down Fraudsters
Banks have traditionally used human analysts to study transactions for signs of fraud or other cybercrime, such as unusually high sums, false information on credit card or loan applications or other indications that something could be amiss. The problem with relying on human analysts is that the sheer quantity of transactions banks process every hour makes it impossible for even a large fraud prevention team to keep up.
Some banks rely on static rules to ease the burden on their analysts, conducting manual reviews only for transactions that have certain red flags of potential fraud. Yet not only are these rules ineffective — with 45 percent of companies using them reporting that they don’t successfully prevent fraud — they also have very high false positive rates. Sixty percent of companies said they had accidentally blocked legitimate customers with their static rules, and another 60 percent stated that even customers who were not blocked faced friction in the review process.
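The static-rule approach the study critiques can be pictured as a short rule table in which any single tripped threshold forces a review and multiple trips force a block. The rules, thresholds and field names below are invented for illustration:

```python
# Hypothetical static-rule screen: each rule is a hard threshold applied
# in isolation, with no sense of how likely fraud actually is.
RULES = [
    ("high_amount", lambda t: t["amount"] > 5000),
    ("new_device", lambda t: t["device_age_days"] < 1),
    ("foreign_ip", lambda t: t["ip_country"] != t["home_country"]),
]

def screen(txn: dict) -> str:
    flags = [name for name, rule in RULES if rule(txn)]
    if len(flags) >= 2:
        return "block"          # multiple red flags: reject outright
    if flags:
        return "manual_review"  # one red flag: queue for an analyst
    return "approve"
```

A legitimate customer paying a large hotel bill while traveling abroad trips both `high_amount` and `foreign_ip` and is blocked outright, which is exactly the false-positive behavior the survey figures describe.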
AI can solve both these problems, as it can take thousands of different data points into account and assign transactions a likelihood of fraudulence rather than outright blocking them based on individual variables. These systems can also compare legitimate transactions to known fraudulent ones and learn the differences between them, refining their algorithms and applying these lessons to future transactions. They can do all of this in fractions of a second, reducing the burden on human analysts and accelerating the review of legitimate transactions so customers can be approved faster. Banks using AI have found their fraud detection rates have improved by as much as half, while their false positive rates have declined by more than 60 percent.
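The scoring approach described above can be sketched as a logistic model that blends many signals into a single fraud likelihood. The feature names and hand-set weights below are illustrative stand-ins for what a production model would learn from labeled transaction history:

```python
import math

# Illustrative weights; a real model would learn thousands of these from
# labeled legitimate and fraudulent transactions.
WEIGHTS = {
    "amount_zscore": 0.9,       # how unusual the amount is for this customer
    "new_device": 1.4,          # 1.0 if the device has never been seen before
    "geo_mismatch": 1.1,        # 1.0 if the location conflicts with history
    "velocity_last_hour": 0.7,  # transactions attempted in the past hour
}
BIAS = -4.0

def fraud_probability(features: dict) -> float:
    # Logistic regression: combine all evidence into one 0-1 likelihood
    # instead of blocking on any single variable.
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def route(features: dict) -> str:
    p = fraud_probability(features)
    if p >= 0.9:
        return "block"
    if p >= 0.5:
        return "manual_review"
    return "approve"            # most traffic clears with no added friction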
The most effective fraud prevention systems use multiple layers of protection, with ironclad authentication at the point of entry and AI systems to handle bad actors that make it past that point. No fraud system by itself is perfect, but a multisystem approach can drastically reduce the threat of fraud.