AI Puts Fraudulent Credit Card Testers To The Test


This is not a test.

Beware the credit card test.

There may be no escaping credit card fraud in one form or another, but one subset of card fraud is clearly on the rise: Last year, Radial’s eCommerce Fraud Technology Lab said credit card testing was up by triple digits on an annualized basis.

Credit card testing is how the bad guys use stolen card data to make sure an account is still valid. They make small purchases with the card, sometimes tiny buys of just a few cents here and there, before moving on to larger transactions: the ill-gotten gains that cause big losses to merchants across the electronics and jewelry verticals.
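For readers who want the pattern spelled out, here is a minimal sketch in Python of the kind of check a fraud team might run: flag any card that racks up a burst of tiny authorizations inside a short window. The thresholds and the flag_card_testing helper are illustrative assumptions, not Radial’s or any issuer’s actual rules.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds only -- not any vendor's real rules.
TEST_AMOUNT_MAX = 1.00            # "a few cents here and there"
BURST_WINDOW = timedelta(minutes=30)
BURST_COUNT = 3

def flag_card_testing(transactions):
    """transactions: iterable of (card_id, timestamp, amount), assumed sorted by time."""
    recent = defaultdict(list)    # card_id -> timestamps of recent tiny authorizations
    flagged = set()
    for card_id, ts, amount in transactions:
        if amount > TEST_AMOUNT_MAX:
            continue
        recent[card_id].append(ts)
        # keep only tiny authorizations that fall inside the rolling window
        recent[card_id] = [t for t in recent[card_id] if ts - t <= BURST_WINDOW]
        if len(recent[card_id]) >= BURST_COUNT:
            flagged.add(card_id)
    return flagged

txns = [
    ("card-123", datetime(2018, 5, 1, 9, 0), 0.15),
    ("card-123", datetime(2018, 5, 1, 9, 5), 0.25),
    ("card-123", datetime(2018, 5, 1, 9, 20), 0.10),
    ("card-456", datetime(2018, 5, 1, 9, 30), 42.00),
]
print(flag_card_testing(txns))    # {'card-123'}
```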

With such subterfuge — tiny transactions that result in a tidal wave of losses — the key for stakeholders from banks to consumers is to have a system in place that is proactive and constantly vigilant, offering both detection and protection, stopping fraud before it can make a dent.

In an interview with PYMNTS’ Karen Webster, Akli Adjaoute, CEO at Mastercard’s Brighterion unit, said breaches are more prevalent than the reported data alone would suggest. Webster noted that last year, breaches disclosed by companies themselves numbered 1,600, and a majority of them, at 71 percent, came through hacking or other means of unauthorized access. Of that tally, said Webster, 20 percent involved the theft of credit card records.

But, said Adjaoute, in 2017, in the financial and insurance industries alone, nearly 1,000 incidents were filed with authorities, and roughly 470 were confirmed breaches. Telecom and tech would add another 1,000, said the executive, and healthcare added at least 500 more.

“Even at the university levels,” he told Webster, “they are being targeted by cyber espionage in a way that we never imagined.”

All told, he said, of the breach count, “it’s not 1,600 … you are talking about 10,000 or more.” As added confirmation, the PYMNTS Global Fraud Index found account takeovers were up 45 percent year over year in the second quarter of 2017.

You can see how it all starts to add up.

Yes, “the goal is to make money when it comes to criminals,” said Adjaoute. “The idea is to get as much information, as many cards as possible, and personal data is extremely important.”

But larger campaigns are afoot, he said.

Now hackers are turning their attention to how electrical grids work, how airlines operate and how technology in general can be compromised. Incidents of such cyber espionage are increasing, he said, noting that governments in China, Russia and North Korea target the very security of the U.S.

The result is a confluence in which governments hire bad guys to steal information. North Korea, for example, will target international financial institutions by acting exactly like criminals: getting information from a bank and selling it on the Dark Web to make money. (North Korea, you may recall, is not exactly among the most economically vibrant nations.)

Thus, state actors are acting just as individual fraudsters might. Throw in botnets and denial-of-service attacks, and the flurry of activity is such that customer service and other departments within financial institutions have less time than they need to verify that transactions are legit. Credit card credentials can also be pilfered through malicious emails, which can be rather ingenious at luring the unwitting into clicking through to fake sites that capture personal information.

The data floating out there is legion, of course, because of breaches as gargantuan as those seen from Target and Equifax. With so much information to be had, knowing who’s behind a transaction, and if they are a bad actor, boils down to monitoring behavior.

To monitor behavior, maintained Adjaoute, using artificial intelligence (AI) is crucial, because “in any breach,” he said, “time is money,” and stopping fraud in its tracks minimizes losses to issuers.

Used in real time, he continued, AI can detect “what we call abnormal behavior. And when we are talking about real time, we are talking about two milliseconds, three milliseconds.”

If an issuer has 10 million cards, Brighterion will have 10 million “smart agents” tied to AI, each building a profile on one of those cards, he told Webster. Similar profiles can be built for merchants and terminals, with the goal of monitoring activity, learning what is normal and flagging actions that fall far enough outside the norm to trigger “red flags.” That can be especially germane to credit card testing, where the incremental transactions fraudsters use to “try” a card can be ferreted out in real time.
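Brighterion’s smart agents are proprietary, so the following is a rough illustration only: a toy per-card profile in Python that learns a card’s normal purchase amounts and flags a transaction that deviates sharply, such as a sudden few-cent “test” charge on a card with months of ordinary spending. The CardAgent class, its z-score threshold and the running statistics are assumptions made for the sketch, not the company’s actual method.

```python
import math

class CardAgent:
    """Toy per-card profile: running mean/variance of purchase amounts
    (Welford's method). One lightweight profile per card, updated per
    transaction -- purely illustrative, not Brighterion's implementation."""

    def __init__(self, card_id, z_threshold=3.0):
        self.card_id = card_id
        self.z_threshold = z_threshold
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def score(self, amount):
        """Z-score-style deviation of this amount from the card's history."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(amount - self.mean) / std if std > 0 else 0.0

    def observe(self, amount):
        """Decide whether the transaction looks abnormal, then fold it into the profile."""
        is_abnormal = self.score(amount) > self.z_threshold
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)
        return is_abnormal

agent = CardAgent("card-123")
for amount in [42.10, 57.80, 38.25, 61.00, 45.50]:   # months of normal spend
    agent.observe(amount)
print(agent.observe(0.12))   # True: a sudden few-cent "test" charge stands out
```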

Decisions to flag, or even stop, a transaction can be made on a case-by-case basis, without resorting to universally applied rules, he said. He likened the use of smart agents to a LoJack: “if someone drives the car and they do not have that key that LoJack provided to me, I will get a phone call.”

In the case of card testing, said Adjaoute, “if the card is being tested when it has already been used for a year or several months … as soon as the criminal is trying to take advantage,” the system will shut them down, saving issuers as much as 98 percent of the losses they might otherwise incur.