A PYMNTS Company

Deepfake Scams Put Banks at Risk of Customer Fraud Lawsuits 

January 26, 2026

Deepfakes are moving from internet mischief to real-world fraud, and the damage is no longer limited to a single bad transfer or a single embarrassed executive. A convincing fake video call can trigger wire payments, change what investors think they know, or push customers into a scam that looks like it came from a trusted brand.


The hard part is that the company that gets impersonated may not be the one that created the fake. Yet it can still face lawsuits if customers lose money and argue the company did not do enough to protect them. The practical solution now taking shape is not a single silver-bullet tool. It is a set of controls, customer education and faster response plans that treat deepfakes as a routine business risk, not a rare event.

That is the core message in a recent Sheppard Mullin post, “Corporate Fraud And Institutional Liability In The Age Of Deepfakes,” which focuses on how deepfake scams are colliding with a legal environment that is more willing to assign responsibility to institutions and platforms. The post notes that most discussions about deepfakes focus on privacy, anonymous actors and proof problems in court. It argues that a less discussed issue is growing quickly: potential liability for companies when fraud is carried out against their customers.

The post points to a now-frequently cited case, in which a finance employee in Hong Kong joined what looked like a normal video conference with the company’s CFO and colleagues. After the call, the employee approved 15 transfers totaling nearly $25 million. The CFO and the coworkers were not real. Their faces, voices and mannerisms were generated fakes.

Sheppard Mullin also cites research suggesting the problem is already widespread. One study found 25.9% of executives said their organizations had experienced at least one deepfake incident. Another study put the share of companies with economic losses from deepfakes far higher.

Read more: Deepfakes-as-a-Service Creating New Fraud Risks for Enterprises

Beyond the financial losses, the growth of deepfake fraud creates new legal risks for banks. Regulators are starting to signal that institutions with stronger resources and better access to threat information may have an affirmative duty to put in place controls that match evolving threats, according to the post. It also says courts are showing greater willingness to require safeguards that protect customers from third-party fraud, especially in financial services where companies handle transactions and sensitive customer data every day. In plain terms, the argument is shifting from “fraudsters did this” to “you were in the best position to stop it.”

Per Sheppard Mullin, “Deepfake fraud is no longer a speculative risk but an operational reality that demands immediate, comprehensive response.”

The post also warns that the liability conversation will not stay neatly inside banking. While financial institutions may be first in line, it says, it is easy to imagine the theory spreading to other sectors, especially companies that facilitate online payments. And deepfakes can trigger different kinds of legal exposure. Fake videos of corporate leaders can move markets and create securities risks. Fake ads can bring consumer protection claims. Fake audio in robocalls can lead to robocall-related litigation.

So what should companies do now, before the next deepfake hits? Sheppard Mullin advises several steps: use strong identity checks in payment portals and across vendors; educate customers on common scam patterns and basic safety steps; narrow the ways you communicate with customers, stick to official channels, and tell customers clearly what those channels are; monitor for scams and respond quickly to correct false information; and, when possible, pursue legal action against identifiable bad actors.

What comes next is a more demanding playbook. Deepfake defenses will start to look like a standing operating cost, not a special project, according to Sheppard Mullin. Companies should expect deeper investment, process changes and cultural change aimed at preserving trust. Those that treat deepfakes as a fundamental operating challenge, and respond with sustained effort, will be in a better position to limit liability and protect customer relationships.