To Blunt Fraud, Chase The Consumer, Not The Fraudster


Here’s a novel concept in the eternal battle against payments fraud: Chase the customer.

Not the bad guys.

As online transaction fraud spreads like wildfire, banks need to contend with the pressures of real-time risk: namely, weeding out the bad transactions from the good. The trap is the false positive — the transaction stopped in its tracks, flagged as risky when no risk exists — causing friction for a perfectly good customer, who may turn away from the transaction, or even the financial institution, in disgust.

Technology can help, of course, with an eye on assisting banks in gathering data, analyzing it and acting on it as efficiently as possible. Speed matters too, as payments are completed faster than ever before, and they will only get faster. Amid it all? Reams of data, signals buried within noise, offering real-time tells about who might be good actors and who might be bad ones.

Not an easy task when there is so much noise in the digital and identity fraud realm.

In an interview with PYMNTS’ Karen Webster, David Excell, founder and chief technology officer at Featurespace, said that during his time at Cambridge University in the early 2000s, the buzzwords “Big Data” and “machine learning” were just emerging and could be traced back, in part, to a simple premise: building statistical models that could decode signals and strip out noise — an endeavor most easily illustrated by a phone call, where the conversation must come through clearly regardless of the ambient noise.

Said Excell, technological pursuits dovetailed with his interest in people-watching at, say, Starbucks (where else?), observing who seemed familiar with their surroundings, who seemed comfortable, who seemed at home in the world and who seemed a (tentative) tourist. Consistent observation and categorization based on a steady flow of real-time information can be extended to the concept of how fraud is understood.

For Featurespace, which offers banks and other customers its ARIC platform (short for adaptive, real-time, individual change), “the way that we approach it is that we determine what are the pieces of data that help understand ‘normal’ user activity, learn the history of what that is and use that as a mechanism” to create informed decision-making processes.

To build a machine learning system, you need historic data, he said, “and a good understanding of what happened in the past.” An integrated solution, such as ARIC, means that machine learning will constantly evolve based on the data flowing through it.

As to what constitutes “good data,” Excell maintained that it “consists of a pool of consistent and habitual individual behavior. It allows our models to understand your consumer behavior — down to regular locations you visit, typical timings of shopping and monetary value.”
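
To make that idea concrete, here is a minimal, hypothetical sketch of how a profile built from a customer’s regular locations, shopping hours and spend could score a new transaction against learned habits. The class name, weights and thresholds are illustrative assumptions, not Featurespace’s implementation.

```python
# Minimal sketch (not Featurespace's actual implementation) of scoring a
# transaction against a customer's learned behavioral profile.
from dataclasses import dataclass, field


@dataclass
class CustomerProfile:
    """Running summary of a customer's habitual behavior."""
    usual_locations: set = field(default_factory=set)  # e.g. cities or merchants
    usual_hours: set = field(default_factory=set)       # hours of day the customer shops
    mean_amount: float = 0.0                            # running mean of spend
    n_transactions: int = 0

    def update(self, location: str, hour: int, amount: float) -> None:
        """Fold a confirmed-good transaction into the profile."""
        self.usual_locations.add(location)
        self.usual_hours.add(hour)
        self.n_transactions += 1
        self.mean_amount += (amount - self.mean_amount) / self.n_transactions

    def anomaly_score(self, location: str, hour: int, amount: float) -> float:
        """Return a 0-1 score; higher means less like this customer's history."""
        score = 0.0
        if location not in self.usual_locations:
            score += 0.4
        if hour not in self.usual_hours:
            score += 0.2
        if self.n_transactions and amount > 3 * self.mean_amount:
            score += 0.4
        return min(score, 1.0)


profile = CustomerProfile()
profile.update("London", 13, 42.50)              # habitual lunchtime spend
print(profile.anomaly_score("Lagos", 3, 900.0))  # unfamiliar place, hour and amount
```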

“The ARIC platform is always able to update what is known about the customer,” Excell told PYMNTS, with an eye on what would be defined as good activity. It feeds the information gathered back into the decision-making process “almost instantly … to optimize the decision-making” about customer behavior. He said the ARIC platform uses labels that are fed back into the system: either approved transactions that turn out to be fraudulent, or declined transactions where the issuer reaches out to the customer to verify whether the transaction was theirs.

Using this information, ARIC constantly reweighs behavioral signals to minimize false positives and increase the amount of fraud captured, said the executive.
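
The feedback loop described here resembles online learning: confirmed labels nudge the weight on each behavioral signal up or down. The sketch below is a generic logistic update under that assumption; the function names, signals and learning rate are illustrative, not ARIC’s actual mechanics.

```python
# Generic online-learning sketch of how label feedback (confirmed fraud vs.
# customer-verified genuine) could reweight behavioral signals.
from math import exp


def predict(weights: dict, signals: dict) -> float:
    """Logistic score: estimated probability the transaction is fraudulent."""
    z = sum(weights.get(name, 0.0) * value for name, value in signals.items())
    return 1.0 / (1.0 + exp(-z))


def feed_back_label(weights: dict, signals: dict, is_fraud: bool, lr: float = 0.1) -> None:
    """Nudge signal weights toward the confirmed outcome (one gradient step)."""
    error = (1.0 if is_fraud else 0.0) - predict(weights, signals)
    for name, value in signals.items():
        weights[name] = weights.get(name, 0.0) + lr * error * value


weights = {}
# An approved transaction later confirmed fraudulent pushes its signals' weights up...
feed_back_label(weights, {"new_location": 1.0, "high_amount": 1.0}, is_fraud=True)
# ...while a declined transaction the customer verifies as genuine pushes them back down.
feed_back_label(weights, {"new_location": 1.0, "high_amount": 0.0}, is_fraud=False)
print(weights)
```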

Risk scores are defined by the algorithm based on the consumer’s learned behavior patterns, grading its confidence in bad versus good behavior. The transaction is then presented to the end user (the consumer) as either accepted or declined. Depending on how the issuer has set up its fraud strategy, the response could also come in the form of a text message or a call to determine the validity of the transaction, said Excell. The system, he continued, “can flag risk from changes as subtle as switching from your right hand to [your] left during mobile banking, or even if you typically shop with EMV and suddenly swipe your card at the point of sale.”
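
An issuer’s fraud strategy that translates such a score into accept, decline or step-up verification might look something like the following sketch; the thresholds and action names are hypothetical.

```python
# Hypothetical issuer fraud-strategy configuration: map a model risk score to
# an action. Thresholds are illustrative only.
def decide(risk_score: float, approve_below: float = 0.3, decline_above: float = 0.8) -> str:
    """Accept, decline, or step up (text/call verification) based on the score."""
    if risk_score < approve_below:
        return "approve"
    if risk_score > decline_above:
        return "decline"
    return "verify_with_customer"  # e.g. an SMS or phone call to confirm


print(decide(0.12))  # habitual behavior -> approve
print(decide(0.55))  # ambiguous -> text or call the customer
print(decide(0.93))  # strongly anomalous -> decline
```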

Banks and payment processors know the approach is working because they get a series of metrics and reports that are based on how much fraud they’re detecting, what the (quantitative) value of fraud is and, as Excell termed it, “more importantly, how many genuine customers are being impacted. How can we make sure that the system isn’t getting in the way of banks having a good relationship with their customers?”
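
Those reports boil down to a handful of counts and sums. A rough illustration, assuming each transaction record carries hypothetical “is_fraud,” “flagged” and “amount” fields:

```python
# Illustrative reporting metrics of the kind described above: detected fraud,
# its monetary value, and how many genuine customers were impacted.
def fraud_metrics(transactions: list) -> dict:
    detected = [t for t in transactions if t["is_fraud"] and t["flagged"]]
    missed = [t for t in transactions if t["is_fraud"] and not t["flagged"]]
    false_positives = [t for t in transactions if not t["is_fraud"] and t["flagged"]]
    return {
        "fraud_detection_rate": len(detected) / max(len(detected) + len(missed), 1),
        "fraud_value_detected": sum(t["amount"] for t in detected),
        "genuine_customers_impacted": len(false_positives),
    }


sample = [
    {"is_fraud": True, "flagged": True, "amount": 250.0},
    {"is_fraud": True, "flagged": False, "amount": 90.0},
    {"is_fraud": False, "flagged": True, "amount": 40.0},  # friction for a good customer
    {"is_fraud": False, "flagged": False, "amount": 12.0},
]
print(fraud_metrics(sample))
```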

Said the executive, “The customer should have no idea that there is a fraud system in the background that is there keeping their money safe.”

Featurespace, said Excell, “looks at the pattern of fraud that has taken place. And ARIC is always trying to re-identify that pattern again in the data” as it is presented in real time. Think of a box: every time some transactions fit inside that box, the system takes another look. In short, examining consumer behavior means making an informed decision about which purchases or online actions seem unlikely for that particular consumer to make. It also means being able to adjust for external events, such as holidays, that may spur spikes in activity, like buying clothes for a loved one as Valentine’s Day nears.

Behavioral analytics becomes especially important when all manner of records traditionally used to “vouch” for identities at banks — Social Security numbers and the like — have been nabbed by the bad guys and are for sale on the Dark Web. The fraudsters know everything you know, from your mother’s maiden name to your driver’s license number. Connected devices are a double-edged sword when it comes to authentication, said Excell, as they expand the landscape that can be attacked — “but also represent other touchpoints that we can use to understand the customer” (i.e., how they buy and when they buy), cementing behavioral patterns that can provide assurance that transactions are indeed legit.

The payments arena might never be completely fraud-free, Excell admitted. The days of old, when repeated visits to the supermarket engendered familiarity, when the person who took your payment knew your face and a bond of trust was formed, may be a distant memory. And now, he said, the question remains: “How do we bring that [personalization] back into the more online and digitized world?”

He cautioned, though, that “machine learning will never be the silver bullet to end all fraudulent transactions. The aim is to make it so difficult that it acts as a deterrent to fraud.”