Brighterion On Unlocking The Full Potential Of AI’s Present And Future For FIs

The real power of AI, Mastercard SVP and Head of Brighterion Sudhir Jha told Karen Webster, is in offering solutions that dynamically respond to risks as they appear, without inserting stumble steps that slow the customer down on the front end.

While artificial intelligence (AI) has been referred to as a lifesaver many times on these pages, usually the phrase is a figure of speech.

But in the case of a new machine-learning technique written up in a study published in the journal Radiology, the phrase is meant quite literally. When fed data collected from cardiac disease patients over a nearly decade-long interval, the AI did better than standard diagnostic tools and monitoring, both in assessing the risk that a patient would die from heart disease and in prompting life-saving medication before a potentially fatal cardiac event.

Sudhir Jha, Mastercard senior vice president and head of Brighterion, told Karen Webster in a recent conversation that the study shows AI is a powerful tool when it comes to fighting risk — both in the ways we typically think of in the world of payments and commerce, and in the much, much wider definitions that exist across verticals like retail or medicine. Wherever it is applied, he said, AI’s potential lies in going beyond the constraints of the systems and tools of the past — toward a future that manages to both carry fewer risks and create less friction for the consumer.

In the world of financial services, where Brighterion does most of its work currently, escaping the systems of the past means getting beyond rule-based systems that have the dual downsides of rejecting too many good transactions while still allowing too many bad ones to go through. The future, and what Brighterion has focused its development on, Jha said, is building and leveraging AI solutions that can respond dynamically to risk factors as they unfold, without necessarily inserting a host of additional stumble steps that slow the customer down on the front end.

The progress, he noted, has been steady over the last several years, and in the last 18 months it has been picking up steam. And while the headline use cases have been in the world of fraud and compliance protections, the reality, he said, is that the application of AI tools is expanding.

“As the evolution of payments is ongoing, and more elements are being tied in,” he said, “we are realizing that these tools can also be used for things like customer retention — not only in the financial services industry but in other industries as well.”

The Growing Playing Field

When it comes to the application of AI, particularly in financial service contexts, fraud and compliance risks are the top two areas that spring to mind because, in many ways, they are the headline makers. When something breaks down, and the bad guys get through, Jha said, it never fails to be a top-of-the-fold news event.

But when talking about risk and risk management in the payments game, Webster and Jha noted, the picture is actually a lot wider. Payments is, in some sense, a risk management business end to end. There is also credit risk, regulatory risk and retention risk, not to mention “a host of other dimensions of risk that AI can really shed light on,” Jha said.

Tapping into that capacity, however, has some requirements.

First, he said, data has to flow freely between different centers within a financial institution (FI) so that the AI is working with all available data when it draws conclusions. That is often easier said than done in the heavily siloed world of legacy financial services players, because there are concerns about data privacy and security when it comes to moving data through an organization or handing it off to a third-party player like Brighterion to feed through its AI tools.

The good news for Brighterion, he said, is that as part of the Mastercard ecosystem it has an easier-than-average time winning that trust with FIs and actually getting the full data sets it needs sent directly into its own secure systems.

One of the perks of the extraordinarily high data-handling standard that exists within the Mastercard ecosystem, he said, is that it tends to engender trust when it comes to sharing data. But from previous professional engagements, he knows that handing over that data is not always a given, and even Brighterion deals with highly risk-averse clients who will only let their data be touched in-house.

Really tapping AI tools and turning them on for partner organizations, he said, sometimes means having to set up within their systems instead of having the data sent over.

“And again, it is all different levels of risk involved depending on what type of data it is,” he said. “This is really why a lot of the new AI firms are at a disadvantage — they don’t have access to the full data troves they need to build insights off of.”

The other big requirement for adequately tapping into that capacity, Jha told Webster, is realizing that the scale and scope of the problem often look a bit different than they appear on the surface. Take, for example, anti-money laundering (AML) and know your customer (KYC) violations — or just good old-fashioned cybercrime. Most of the focus, he noted, tends to fall on attackers who come from the outside looking to break in. What there has been less focus on, because it is less common, is when the attacks and violations are actually coming from within the organization.

Often, he said, these two vectors aren’t totally separate — many external attacks require the involvement of an internally planted aide. And although these attacks are less common, when they happen, they tend to be big.

AI, he said, tends to spot these things, because unlike rule-based systems, AI is good at spotting nuances and variations on frauds that haven’t existed before — and thus haven’t been programmed into a rule.

That’s why, he said, for the AI systems of the future to really flourish, the rules systems of the past have to go.

Breaking Up With the Rules

While on some level it is becoming increasingly common knowledge that old-school, rule-based systems aren’t an adequate response to a digital security environment that is highly dynamic, they tend to hang around.

Not because the FIs that developed those systems are sentimentally attached to them, but because they have spent years feeding them information about situations that raise fraud flags. The fear among many executives considering the jump to an AI system that dynamically responds to threats is that something their old rules would have caught will manage to slip by the AI goalie.

The solution, Jha said, isn’t to fight with them about it; it is simply to say that this does not have to be an either/or — it can be a both/and. They don’t have to unplug their rule-based system, they can instead use it in parallel with Brighterion’s system. And over time, they can document which system is catching which issues.

What most firms tend to see within six months to a year, he noted, is that there are almost no genuine fraud cases caught by the rule-based system that are not also caught by the AI system. The rule-based system misses real fraud that the AI system catches, while the “misses” the AI system logs aren’t actual fraud cases at all — they are false positives, good transactions the rule-based system flags at the expense of legitimate customers.
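To make the parallel run concrete, here is a minimal sketch, in Python, of how that side-by-side tally might be kept. It is an illustration under assumptions, not Brighterion’s tooling: the Transaction fields, the rules_decision and ai_decision stand-ins, and the shadow_compare counters are all hypothetical names, and the decision logic is a crude placeholder for a real rule engine and risk model.

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    tx_id: str
    amount: float
    risk_score: float       # stand-in for an AI model's output, between 0 and 1
    confirmed_fraud: bool   # ground truth, known once the case has been investigated


def rules_decision(tx: Transaction) -> bool:
    """Stand-in for a legacy rule engine (here, a crude amount threshold)."""
    return tx.amount > 5_000


def ai_decision(tx: Transaction, threshold: float = 0.9) -> bool:
    """Stand-in for the AI system: compare a model score against a tunable threshold."""
    return tx.risk_score > threshold


def shadow_compare(transactions) -> dict:
    """Tally, per transaction, which system flagged it and whether it was real fraud."""
    tally = {
        "fraud_caught_by_both": 0,
        "fraud_caught_by_rules_only": 0,  # the gap executives worry about
        "fraud_caught_by_ai_only": 0,     # real fraud the rules miss today
        "rules_false_positives": 0,       # good customers only the rules would block
    }
    for tx in transactions:
        rules, ai = rules_decision(tx), ai_decision(tx)
        if tx.confirmed_fraud:
            if rules and ai:
                tally["fraud_caught_by_both"] += 1
            elif rules:
                tally["fraud_caught_by_rules_only"] += 1
            elif ai:
                tally["fraud_caught_by_ai_only"] += 1
        elif rules and not ai:
            tally["rules_false_positives"] += 1
    return tally


if __name__ == "__main__":
    sample = [
        Transaction("t1", 6_200, 0.97, confirmed_fraud=True),   # both systems catch it
        Transaction("t2", 1_200, 0.95, confirmed_fraud=True),   # only the AI catches it
        Transaction("t3", 8_000, 0.10, confirmed_fraud=False),  # a good customer the rules would block
        Transaction("t4", 900, 0.05, confirmed_fraud=False),    # neither system flags it
    ]
    print(shadow_compare(sample))
```

Run over months of real traffic, counters like these are what let a firm see, in its own data, whether the rule engine is still catching anything the AI misses.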

Once they see they aren’t really experiencing holes in the system, they tend to unplug the rule-based system that isn’t doing its job. Someday soon, Jha said, Brighterion will be able to skip the parallel tests and instead guarantee that the entirety of the rule-based system’s database of instructions will be built into the AI system that replaces it.

It will never repel 100 percent of fraud, Jha said. The only systems that could promise that would have to be set to reject 100 percent of transactions — which solves the fraud problem, but in a less than optimal way.

What AI can do, and is doing, Jha noted, is set the dial so that the vast majority of fraud is locked out, but the consumer never sees it, never feels it — and never flees a transaction because of it.