Deepfakes, synthetic voice technology and artificial intelligence (AI)-generated images are reshaping the economics of fraud across insurance, forcing companies to deploy AI not as an efficiency tool but as a defensive one.
According to data cited by Fierce Healthcare, insurance fraud exposure is up to 20 times higher than in banking, with overall fraud expected to grow roughly 8% year over year. More troubling for insurers is the acceleration of AI-enabled fraud. Deepfake-related incidents, particularly those involving synthetic voices and identities, are projected to rise more than 160%, fueled by automated bot networks, emotionally persuasive synthetic voices and increasingly realistic image and video generation. The result is a fraud landscape where traditional red flags and manual review processes are no longer sufficient.
Synthetic Voices Add to the Noise
One of the fastest-growing vectors is synthetic voice fraud. According to Fierce Healthcare, insurers saw a 19% increase in fraud tied to synthetic voice attacks in 2024, particularly in call-center interactions where verbal confirmation has historically served as a primary trust signal. Fraudsters are using AI voice-cloning tools to impersonate policyholders, providers and even internal employees, often with only seconds of source audio scraped from social media or voicemail greetings.
These attacks are difficult to detect with human ears alone. Synthetic voices can replicate tone, cadence and emotional inflection, allowing fraudsters to pressure agents into bypassing verification steps. The International Travel and Health Insurance Journal notes that emotionally tuned AI voices are increasingly effective at manipulating frontline staff, exploiting empathy and urgency to move fraudulent claims or policy changes through quickly.
Deepfakes, Disinformation and an Expanding Claims Surface
Beyond voice, insurers are grappling with a surge in AI-generated images and documents. Motor insurers are seeing claims supported by fabricated accident photos, manipulated vehicle damage images and entirely synthetic crash scenes. Insurance Business reports that AI-generated images are now being actively used in motor insurance fraud, allowing fraudsters to submit convincing visual evidence without staging real accidents.
Swiss Re’s Systematic Observation of Notions Associated with Risk (SONAR) 2025 report frames this as part of a broader disinformation problem. Deepfakes and generative AI are not just tools for individual fraud attempts but enablers of coordinated fraud campaigns. Networks can mass-produce fake claims, supporting documents and multimedia evidence at a scale that overwhelms traditional investigation teams.
The outlook is not all doom and gloom, however. The industry is fighting back with the same technology, moving away from more traditional ways of combating fraud.
The shift to AI-powered fraud defense reflects a broader move away from rule-based detection toward probabilistic and pattern-driven systems. Rather than asking whether a claim violates a specific rule, insurers are asking whether it statistically resembles known fraud behaviors across thousands of variables.
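In practice, that kind of pattern-driven scoring often takes the form of an anomaly detection model trained on historical claims. The sketch below, using scikit-learn's IsolationForest, is purely illustrative: the claim features, training data and review threshold are hypothetical assumptions, not any insurer's actual system.

```python
# Illustrative sketch of pattern-driven fraud scoring. The feature names,
# synthetic training data and threshold are hypothetical assumptions,
# not any insurer's production model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in claim features: amount, days since policy start, photo count,
# minutes from incident to filing. Real systems use thousands of variables.
legit = rng.normal(loc=[3000, 400, 4, 2880],
                   scale=[800, 120, 1, 600],
                   size=(500, 4))
model = IsolationForest(contamination=0.02, random_state=0).fit(legit)

# Score a new claim: large amount, filed minutes after a brand-new policy.
new_claim = np.array([[9500, 3, 12, 20]])
score = model.decision_function(new_claim)[0]  # lower = more anomalous
print(f"anomaly score: {score:.3f}",
      "-> flag for review" if score < 0 else "-> pass")
```

The point is not the specific algorithm but the posture: no single rule fires, yet the claim's overall statistical profile lands far from the population of legitimate submissions.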
Fighting Gen AI With Gen AI
To counter these new threats, insurers are deploying computer-vision models trained to detect artifacts specific to AI-generated imagery. The Insurance Council of Australia, working with analytics providers including EXL and Shift Technology, is building a national AI-powered fraud detection and investigations platform scheduled to launch in early 2026. The system is designed to allow insurers to identify synthetic identities, manipulated images and coordinated submission behavior across carriers, rather than treating each claim in isolation.
The platform will initially focus on motor insurance. Claims flagged by one insurer can surface related patterns relevant to others, allowing investigators to identify repeat fraud networks using shared indicators such as image artifacts, document reuse, timing anomalies and metadata similarities.
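One common technique for spotting image reuse across submissions is perceptual hashing, which fingerprints a photo in a way that survives resizing and re-encoding. The sketch below uses the open-source Pillow and imagehash libraries; the matching threshold and helper functions are illustrative assumptions, not details of the Insurance Council of Australia platform.

```python
# Illustrative sketch: detecting claim-photo reuse across carriers with a
# perceptual hash. Library calls are real (Pillow, imagehash); the
# threshold and workflow are assumptions for illustration.
from PIL import Image
import imagehash

def claim_photo_fingerprint(path: str) -> imagehash.ImageHash:
    """A perceptual hash survives resizing and re-encoding,
    unlike a byte-level checksum."""
    return imagehash.phash(Image.open(path))

def likely_reused(path_a: str, path_b: str, max_distance: int = 6) -> bool:
    """A small Hamming distance between hashes suggests both claims
    were supported by the same source image."""
    distance = claim_photo_fingerprint(path_a) - claim_photo_fingerprint(path_b)
    return distance <= max_distance
```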
Many insurers are turning to generative models themselves to strengthen defenses. Research shows how generative adversarial networks, or GANs, can be used to simulate fraudulent behavior and improve detection accuracy. By generating synthetic fraud scenarios, insurers can train detection systems on rare but high-impact cases that traditional datasets fail to capture.
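At its core, a GAN pairs a generator that fabricates plausible records with a discriminator that tries to tell them from real ones, each improving the other through adversarial training. The PyTorch sketch below shows the minimal loop on tabular claim features; the architecture sizes, feature count and placeholder data are assumptions for illustration, not a published insurer model.

```python
# Minimal GAN sketch for generating synthetic tabular fraud-like records
# to augment detection training data. Sizes and data are illustrative.
import torch
import torch.nn as nn

N_FEATURES = 8   # hypothetical numeric claim features
NOISE_DIM = 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),  # real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_fraud = torch.randn(256, N_FEATURES)  # placeholder for real flagged claims

for step in range(1000):
    # Train the discriminator on real fraud records vs. generated ones.
    fake = generator(torch.randn(256, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_fraud), torch.ones(256, 1)) +
              loss_fn(discriminator(fake), torch.zeros(256, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(256, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The trained generator can then emit rare, fraud-like records to
# augment a downstream classifier's training set.
synthetic_batch = generator(torch.randn(64, NOISE_DIM)).detach()
```

The practical payoff is in the last line: rare but high-impact fraud patterns that appear only a handful of times in real data can be synthesized in volume, giving detection models more examples to learn from.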
As fraud grows more sophisticated, so do the defenses. PYMNTS Intelligence data shows that 7 in 10 financial institutions use AI and machine learning to ferret out and combat fraud, up from 66% in 2023.
In addition, platforms and payment systems are testing the use of AI to spot fraud patterns before losses occur. In September, global messaging service Swift teamed up with banks to test AI in preventing cross-border payments fraud.