Deepfakes-as-a-Service Creating New Fraud Risks for Enterprises

January 21, 2026

Enterprises are confronting a new phase of fraud risk as deepfakes evolve from isolated synthetic videos into fully integrated, end-to-end attack tools that can impersonate executives, job candidates and trusted counterparties with alarming realism. According to a recent analysis by Jones Walker, what was once a reputational or content-moderation concern has become a material governance, compliance and financial risk that general counsels, compliance officers and CFOs must now address as part of core enterprise risk management.

Per Jones Walker, deepfake technology is increasingly being offered “as a service,” powered by autonomous AI systems capable of executing multi-step fraud schemes with little or no human intervention. These agentic systems can generate synthetic voices, faces and documents, coordinate interactions across email, video and messaging platforms, and adapt in real time to evade detection.

The scale of the risk is already evident. In one widely cited incident, engineering firm Arup lost $25 million after an employee joined a video call populated entirely by deepfaked versions of the company’s CFO and senior colleagues, who convincingly instructed the employee to execute a series of wire transfers before the deception was uncovered. Fraud forecasters warn that such scenarios are no longer edge cases. Gartner projects that one in four job candidate profiles globally could be fake by 2028, while Deloitte estimates that generative AI-enabled fraud could drive $40 billion in U.S. losses by 2027, Jones Walker notes.

At the same time, the legal and regulatory response to deepfakes is rapidly expanding but increasingly fragmented. Since 2022, dozens of U.S. states have enacted deepfake-specific laws addressing political manipulation, nonconsensual intimate imagery and misuse of synthetic likenesses. These statutes vary widely in scope, timing windows and penalties, creating significant compliance complexity for companies operating across multiple jurisdictions.

Federally, the TAKE IT DOWN Act, signed into law in 2025, criminalizes the publication of nonconsensual intimate deepfakes and imposes strict takedown obligations on covered platforms. Internationally, the European Union’s AI Act will add another layer of requirements in August 2026, mandating clear disclosure and machine-readable marking of AI-generated content, with penalties that can reach millions of euros or a percentage of global turnover.

Critically, per Jones Walker, traditional insurance coverage often fails to respond to deepfake-enabled fraud. Standard crime and fidelity policies typically exclude losses involving “voluntary parting,” meaning that when an employee knowingly authorizes a transfer, even under sophisticated impersonation, coverage may be denied. While some insurers have begun offering endorsements specifically addressing deepfake incidents, these products remain limited, and sublimits are often far below the scale of potential losses.

Against this backdrop, expectations around “reasonable” governance are beginning to crystallize, according to the analysis. Industry provenance and authentication standards, such as those developed by the Coalition for Content Provenance and Authenticity (C2PA), are emerging as benchmarks that regulators and courts may look to when assessing negligence. Companies that fail to implement available detection and authentication tools may find themselves exposed not only to fraud losses, but also to litigation and enforcement risk.
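
To make the provenance idea concrete, the sketch below shows a fail-closed media gate in the spirit of C2PA content credentials. It is a minimal illustration, not the C2PA specification: real Content Credentials are signed manifests embedded in the media file and validated against a certificate trust list, whereas here a keyed HMAC over the file bytes stands in for that signature so the example stays self-contained. All names are hypothetical.

```python
import hashlib
import hmac
from pathlib import Path

# Stand-in for a signer's private key / certificate chain (demo only).
TRUSTED_KEY = b"demo-shared-secret"

def issue_credential(path: Path) -> str:
    """Bind a credential to the exact bytes of a media file."""
    return hmac.new(TRUSTED_KEY, path.read_bytes(), hashlib.sha256).hexdigest()

def verify_credential(path: Path, credential: str) -> bool:
    """Recompute the binding; any edit to the file invalidates it."""
    return hmac.compare_digest(issue_credential(path), credential)

def admit_media(path: Path, credential: str | None) -> str:
    """Fail-closed gate: unverified content is routed to manual review."""
    if credential and verify_credential(path, credential):
        return "accept"
    return "manual_review"

if __name__ == "__main__":
    f = Path("incoming_frame.bin")
    f.write_bytes(b"example media payload")
    cred = issue_credential(f)
    print(admit_media(f, cred))    # accept: bytes match the credential
    f.write_bytes(b"tampered payload")
    print(admit_media(f, cred))    # manual_review: binding broken
```

The design point survives the simplification: content arriving without a verifiable credential is not rejected outright but routed to human review, so the absence of provenance becomes a signal rather than a silent pass.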

The bottom line for enterprises is that deepfake fraud can no longer be treated as a niche technology issue. It implicates vendor due diligence, internal controls over financial authorizations, incident response planning, insurance coverage and jurisdiction-specific legal compliance. As deepfakes become cheaper, faster and more convincing, organizations that do not adapt their governance frameworks may find that the next fraud attempt is not just more sophisticated, but far more costly.
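
As a closing illustration of what hardened controls over financial authorizations can look like, here is a minimal sketch of an out-of-band confirmation rule of the kind that would have blunted the Arup-style scenario. Every name, threshold and channel label is hypothetical, and a production control would live inside the payment workflow rather than a standalone class; the point is only that no single channel, however convincing, can release a large transfer on its own.

```python
from dataclasses import dataclass, field

CALLBACK_THRESHOLD_USD = 10_000  # illustrative policy threshold

@dataclass
class TransferRequest:
    amount_usd: float
    beneficiary: str
    request_channel: str                     # channel the instruction arrived on
    confirmations: set[str] = field(default_factory=set)

    def confirm_out_of_band(self, channel: str) -> None:
        # Only confirmations from a channel other than the one the request
        # arrived on count; a deepfaked video call cannot confirm itself.
        if channel != self.request_channel:
            self.confirmations.add(channel)

    def releasable(self) -> bool:
        # Small transfers pass; large ones need independent confirmation.
        if self.amount_usd < CALLBACK_THRESHOLD_USD:
            return True
        return bool(self.confirmations)

req = TransferRequest(25_000_000, "ACME Ltd", "video_call")
assert not req.releasable()                  # a convincing call is not enough
req.confirm_out_of_band("video_call")        # same channel: ignored
req.confirm_out_of_band("phone_callback")    # staff dial a number on file
assert req.releasable()
```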