Healthcare Providers Turn to AI to Rein in Fraudsters’ 240-Hour Head Start

In the battle against healthcare fraud, where the job is to make sure cyberattackers don’t make off with sensitive data or funds, it might seem as if the deck is stacked against the good guys.

That’s because there’s a significant amount of manual labor involved when setting up fraud detection systems, Beth Griffin, vice president, security innovation, healthcare vertical cyber and intelligence at Mastercard, told Karen Webster.

“It takes about 240 hours to create and maintain some of these traditional rules-based algorithms,” she said. And all the while, fraud is evolving — which means providers are badly lagging behind their attackers. The problem is so bad that providers lose about 12 percent of their annual revenues to fraud, waste and abuse (FWA), according to the latest PYMNTS study done in collaboration with Mastercard’s Brighterion artificial intelligence (AI) organization.

See also: Data Brief: Only 12 Pct of Healthcare Firms Use AI to Fight Fraud, Waste, Abuse 

Griffin maintained that AI, in the service of detection and analysis, can cut down on the time it takes to flag real fraudulent activity, reducing false positives in the meantime. That’s not to say it will replace all rules-based algorithms that have historically been used in the market, but she noted that “the reality is that AI can replicate many of the things that rules-based algorithms have historically done — and do it much more efficiently.”
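To make the contrast concrete, here is a minimal sketch of the two approaches Griffin describes. Everything in it is invented for illustration — the claim fields, thresholds and weights are hypothetical, not Mastercard's or Brighterion's actual rules or models.

```python
import math

def rules_based_flag(claim: dict) -> bool:
    # Hand-tuned conditions of the kind analysts write and maintain --
    # the upkeep Griffin's "240 hours" figure refers to.
    if claim["amount"] > 10_000:
        return True
    if claim["procedures_per_visit"] > 8:
        return True
    if claim["provider_claims_today"] > 50:
        return True
    return False

def model_score(claim: dict, weights: dict, bias: float) -> float:
    # A model learns its weights from labeled historical claims instead of
    # relying on hand-set cutoffs; here we simply apply pre-learned weights
    # and squash the result into a 0..1 risk score.
    z = bias + sum(weights[k] * claim[k] for k in weights)
    return 1 / (1 + math.exp(-z))

claim = {"amount": 9_500, "procedures_per_visit": 9, "provider_claims_today": 12}
print(rules_based_flag(claim))  # the procedure-count rule fires

weights = {"amount": 0.0002, "procedures_per_visit": 0.3, "provider_claims_today": 0.02}
print(round(model_score(claim, weights, bias=-5.0), 3))
```

The rules function flags this claim outright; the scored version instead returns a graded risk that investigators can rank and triage, which is part of how AI systems cut false positives.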

There’s a wide disconnect between recognizing the problem and, well, doing something about it.

Against that backdrop, PYMNTS research found that only 12 percent of firms are currently using AI to detect and address FWA. About half of healthcare organizations still rely on rules-based algorithms to detect FWA, a $300 billion problem (in the U.S. alone), though the use of AI has increased substantially (off a relatively low base).

Read more: Report: 100 Healthcare Execs Speak out on Using AI to Curb Fraud, Waste and Abuse 

Griffin pointed to two major hurdles: “It all comes down to trust in the technology — and cost.” It stands to reason that the major health plans, with their deeper pockets, were the first to embrace AI in the fight against FWA. As for the vendors, Griffin said that in the early days, when they first tackled FWA, they didn’t engage health plans in a particularly collaborative way.

Those vendors, she said, “brought in the data, went into their back rooms, ran their models and didn’t necessarily engage with their partners. That reduced the confidence — for a while — in the industry.”

That dynamic is changing, Griffin said, evolving into a more collaborative effort in which vendors engage healthcare organizations to share results and point toward operational efficiencies and cost savings. That has involved a bit of hand-holding, she noted, especially as smaller and mid-sized clinics need to be shown the advantages of models that enlist advanced technologies.

AI is a potent weapon in the fight against fraudsters, she said, as results of the AI-driven analyses of transactions “train” the model even further, so it evolves and becomes even more accurate.
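That feedback loop can be sketched as one step of online learning: an investigated alert, once confirmed fraudulent or legitimate, becomes a labeled example that nudges the model's weights. The feature names and learning rate below are illustrative assumptions, not details from the article.

```python
import math

def predict(weights: dict, bias: float, features: dict) -> float:
    # Logistic score in 0..1 from the current weights.
    z = bias + sum(w * features[k] for k, w in weights.items())
    return 1 / (1 + math.exp(-z))

def update(weights: dict, bias: float, features: dict, label: int, lr: float = 0.1) -> float:
    # One online logistic-regression step: move weights toward the
    # confirmed outcome (label=1 fraud, label=0 legitimate).
    p = predict(weights, bias, features)
    err = label - p  # positive if we under-scored real fraud
    for k in weights:
        weights[k] += lr * err * features[k]
    return bias + lr * err  # updated bias

weights = {"amount_norm": 0.0, "velocity": 0.0}
bias = 0.0

# An investigator confirms this claim was fraud; the model adapts.
case = {"amount_norm": 0.9, "velocity": 0.7}
before = predict(weights, bias, case)
bias = update(weights, bias, case, label=1)
after = predict(weights, bias, case)
print(before < after)  # True: similar cases now score higher
```

Each closed investigation feeds the next prediction, which is why the model "evolves and becomes even more accurate" over time rather than waiting on a manual rules rewrite.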

Tackling the False Positives 

Beyond the boon of catching fraudsters in the act, so to speak, Griffin noted that AI-underpinned models are also having a “huge impact” on false positives — claims or providers that initially look suspicious but prove legitimate upon analysis. The models become smart enough to recognize patterns of behavior in real time, reducing the frequency of false positives.

“We’re proving successful not only in reducing [the false positives], but also in driving the real goals of incremental savings and driving efficiencies,” Griffin told Webster. The impacts are indeed significant: She noted that roughly 40 percent of “alerts” under other (read: non-AI) fraud management systems are false positives. It also costs a special investigative unit about $7,500 just to open and close a case. The concern over catching fraudulent claims is so pervasive that insurers, per the PYMNTS study, have flagged or investigated 40 percent of providers’ post-payment claims and 25 percent of consumers’ post-payment claims — and that’s in the first quarter of 2021 alone.
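The stakes of that false-positive rate are easy to quantify from the article's own figures. The alert volume and the AI-side rate below are illustrative assumptions; only the 40 percent rate for non-AI systems and the roughly $7,500 per-case cost come from the article.

```python
ALERTS = 1_000          # hypothetical monthly alert volume (assumption)
COST_PER_CASE = 7_500   # per-case cost to open and close (from the article)
FP_RATE_RULES = 0.40    # false-positive share under non-AI systems (from the article)
FP_RATE_AI = 0.15       # assumed improved rate, for illustration only

wasted_rules = ALERTS * FP_RATE_RULES * COST_PER_CASE
wasted_ai = ALERTS * FP_RATE_AI * COST_PER_CASE
print(f"${wasted_rules:,.0f} vs ${wasted_ai:,.0f} spent closing false positives")
# With these assumptions: $3,000,000 vs $1,125,000
```

Under these assumed volumes, even a modest drop in the false-positive rate frees millions that a special investigative unit would otherwise spend closing legitimate claims.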

Operational improvements come as AI model users are able to prioritize their investigations. By way of example, Griffin said that in just one state, with one Medicare plan, AI was able to identify $18 million in potential incremental savings.

As she noted, “using AI, from a compliance perspective, is a no-brainer … the technology can support the compliance and regulatory environment in a much more efficient way.” She told Webster that “as the market continues to evolve — and the efficiencies are embraced — we’ll continue to see more use of AI.”