March 2026
What’s Next in Payments

Why Identity Silos Are Failing in the AI Era

As generative AI turns faces, voices, documents and even “normal” behavior into programmable fraud tools, 10 executives across payments, identity and fraud are converging on the same conclusion: digital trust can no longer depend on a single checkpoint.


    The advance of artificial intelligence is upending the once-stable practice of identity management.

    Companies used to be able to verify a customer at onboarding, authenticate them again at login, perhaps challenge a transaction if something looked unusual, and then move on. Increasingly, that is no longer enough.

    In conversations with PYMNTS for the March edition of the “What’s Next in Payments” series, “How Will AI Change Identity?”, executives across payments, identity and fraud converged on the same four conclusions, explored in the sections that follow.

    The immediate problem is not just deepfakes, voice clones or synthetic IDs in isolation. It’s the collapse of confidence in traditional trust signals altogether. In an AI-mediated environment, fraudsters can now fabricate not only faces and voices, but also documents, device signals and, increasingly, the ordinary rhythms of human behavior itself.

    This, as the experts stressed, means that digital trust must become continuous, contextual and built for a world where software agents may soon transact alongside humans.

    The first generation of digital identity was document-centric and event-based. Verify a person, open the account, authenticate at login, maybe challenge at checkout. The second generation introduced more signals, more automation and more risk scoring, but still relied heavily on static ideas of identity and human distinction.

    The third generation, now coming into view, is different. It assumes that any individual signal can be spoofed, that humans and machines will increasingly blur in digital channels, that agentic transactions will become the norm and that organizations must continuously score trust rather than episodically.

    Identity Is Shifting From a One-Time Check to Continuous Trust

    AI is forcing payments and identity leaders to rebuild digital trust from static verification to continuous, contextual, permissioned trust.

    Or, as Veriff Chief Technology Officer Hubert Behaghel told PYMNTS, identity systems need to answer three questions continuously: “Are you who you say you are? Can you be trusted? And are you still the same person related to the account?”

    The implication is not merely that one or two controls are weakening. It is that businesses can no longer assume any single signal, especially one that looks plausibly human, is enough on its own.

    “What we’re seeing right now is AI breaking identity at remote trust moments,” Matthew Pearce, vice president of fraud risk management and dispute operations at i2c, told PYMNTS, pointing to onboarding, account access and call center interactions in which institutions must verify users they never meet.

    “With synthetic voices now, bad actors can scrape voices off the internet via social media channels, take a snippet of that voice and then create an entire script based on that person’s voice,” explained Elizabeth Wadsworth, vice president of decision intelligence and transformation at Velera.

    “We’re in a space where it’s hitting all sides,” she added.

    And being hit on all sides requires a more expansive idea of identity, one that is less a one-time credential check and more a living confidence score built from behavior, context, intent, device integrity and transaction-level permissions over time.

    “We’re moving beyond an era where I’m looking to stop ‘one thing,’ and instead I’m looking for behaviors. It’s how people interact with you that is becoming more relevant,” said Richard Swales, chief risk and compliance officer at Paysafe.

    The Real Threat Is Not Just Deepfakes—It’s “Fake Normal”

    The throughline is clear: the financial services industry urgently needs to move from event-based authentication to persistent, contextual confidence. But what’s behind this shift?

    The 10 experts PYMNTS spoke with collectively point the finger at AI’s ability to imitate legitimate customer behavior at scale, exposing the weakness of point solutions and siloed defenses.

    “If you start to have bots that don’t look like bots and actually do look like humans, they’re typing with human-like cadence, maintaining long-lived sessions, that’s where the real fraud starts to come in,” Tim Joslyn, chief technology officer at Paymentology, told PYMNTS.

    “Fake normal behavior worries me the most,” he added.

    Multiple executives point to bots and synthetic identities that can mimic typing rhythm, browsing patterns, session behavior and transaction flows well enough to appear legitimate as a more consequential threat than synthetic media alone.

    “If a human can do it, we are now at a stage where the machines can do it in plausible ways,” Adam Hiatt, vice president of fraud strategy at Spreedly, said. “It’s an arms race.”

    “The one that we’re spending time looking at, and it’s probably harder to detect, is fake behavior,” James Mirfin, senior vice president and global head of risk and security intelligence solutions at Visa, explained.

    “If you are sitting in a café or restaurant, people’s behavior typically is fairly similar,” Mirfin added. “But you can spot someone that looks a bit nervous or twitchy. It’s the same in banking. Historically, bank tellers were looking for anomalous behavior.”

    In the digital world, those anomalous signals can come from, among other things, transaction patterns, device data and location information.
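    The teller analogy can be made concrete. One simple way to spot anomalous digital behavior is to compare a new event against a customer’s historical baseline across several signals at once. The sketch below is purely illustrative: the signal names, history format and scoring method are assumptions for the example, not any vendor’s actual model.

```python
from statistics import mean, stdev

# Hypothetical sketch: score how far a new event deviates from a customer's
# historical baseline. Field names and signals are invented for illustration.

def anomaly_score(history: list[dict], current: dict,
                  features=("amount", "hour", "distance_km")) -> float:
    """Average absolute z-score of the current event across numeric signals."""
    scores = []
    for f in features:
        values = [h[f] for h in history]
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # a constant signal carries no anomaly information here
        scores.append(abs(current[f] - mu) / sigma)
    return sum(scores) / len(scores) if scores else 0.0

history = [
    {"amount": 40, "hour": 19, "distance_km": 2},
    {"amount": 55, "hour": 20, "distance_km": 3},
    {"amount": 35, "hour": 18, "distance_km": 1},
    {"amount": 60, "hour": 21, "distance_km": 4},
]
typical = {"amount": 50, "hour": 20, "distance_km": 2}      # looks like the customer
unusual = {"amount": 900, "hour": 4, "distance_km": 8000}   # far from baseline

print(anomaly_score(history, typical))   # low score
print(anomaly_score(history, unusual))   # high score: flag for review
```

    The point of combining signals is the one the experts make throughout: any single signal can be spoofed, but a coordinated deviation across amount, timing and location is much harder to fake.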

    Zac Cohen, chief product officer at Trulioo, summarized the shift: “The biggest change companies can make” is moving “from point-in-time verification to continuous, contextual trust.”

    Security Architecture Is Becoming Multilayered, Signal-Rich and Risk-Based

    Authentication is no longer a gate. It is becoming a stream. That’s the architecture-level change behind nearly every expert recommendation.

    “Fraud operates at machine speed,” said i2c’s Pearce. “Identity has to operate at machine speed also.”

    Once identity is reframed as a continuous assessment rather than a checkpoint, the technology stack naturally changes too.

    “The challenge is really instrumenting the full life cycle of the customer with strong identity and authentication,” said Veriff’s Behaghel. “When all of that data lives together, you can think in terms of thousands or tens of thousands of signals. That’s a completely different game.”

    Trulioo’s Cohen made the same case from the opposite angle, warning against fragmented defenses and noting that “point solutions will always fail against a multidimensional attack.”

    Evaluated separately, each piece of a synthetic identity may appear plausible. But evaluated together, inconsistencies may surface.

    “You can’t just focus on account creation or setup,” Visa’s Mirfin said. “It’s about identifying good behavior and good activity over time.”

    The new anti-fraud operating model increasingly relies on multilayered defenses that combine device intelligence, network signals, behavioral analytics, biometrics, transaction context and real-time risk scoring.

    “There’s a hyper focus in the industry right now on measuring and detecting anomalies in behavior and data patterns that really ensure even the most sophisticated synthetic identities are flagged and checked before any basic verification process takes place,” Kevin Ostrander, chief revenue officer at digital insurance platform One Inc, explained.

    Just as important, companies should adaptively apply anti-fraud controls, with low-friction flows for low-risk activity and step-up authentication only when uncertainty rises. The winning model is emerging as one of precision, with firms correlating more signals across more moments, with faster decisioning and fewer organizational silos.
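    The adaptive model described above can be sketched as a simple policy: blend per-signal risk scores, then route low-risk activity through frictionlessly, challenge mid-risk activity, and block the rest. The weights, thresholds and signal names here are assumptions for the example, not a production policy.

```python
# Illustrative risk-based step-up sketch. Signal names, weights and
# thresholds are invented for this example.

def combined_risk(signals: dict) -> float:
    """Weighted blend of per-signal risk scores, each assumed in [0, 1]."""
    weights = {"device": 0.25, "behavior": 0.35, "network": 0.15, "context": 0.25}
    return sum(weights[k] * signals[k] for k in weights)

def decide(signals: dict) -> str:
    risk = combined_risk(signals)
    if risk < 0.3:
        return "allow"      # low friction for low-risk activity
    if risk < 0.7:
        return "step_up"    # e.g., a biometric or one-time-code challenge
    return "block"

print(decide({"device": 0.1, "behavior": 0.1, "network": 0.2, "context": 0.1}))  # allow
print(decide({"device": 0.6, "behavior": 0.5, "network": 0.4, "context": 0.5}))  # step_up
```

    The design choice worth noting is that friction scales with uncertainty: most legitimate customers never see a challenge, and the challenge budget is spent only where the combined signals disagree with the expected pattern.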

    Tokenization and Agent Authorization Are Becoming Foundational

    Threats are evolving quickly, and the line between genuine automation and malicious automation is only going to get harder to draw. But the broad direction for financial services and payments is unmistakable. In an AI-shaped economy, firms can no longer build digital trust around one credential, one gate or one moment.

    “Checks have been centered around determining if activity is machine-driven or human-driven,” Christine Hurtubise, vice president of artificial intelligence and machine learning at FIS, said. “That paradigm is shifting as agents are able to replicate human activities.

    “Pushing forward the ability for authorized agents to securely interact with payments and some of our end systems would be a big opening in the industry,” Hurtubise added.

    After all, when building the control layer for agentic commerce, the critical question becomes not only “Who are you?” but also “Who authorized this agent, for what and within what limits?”

    “If you go back 18 months or two years, most merchants would say: ‘Bot? Stop.’” Visa’s Mirfin said. “Now you’ve got good bot behavior interacting with your website, and that changes the game. … If a consumer chooses to use an agent to shop for them, the merchant needs to be ready to accept that and recognize that interaction.”

    Many experts pointed to tokenization as a potential way to reduce exposure of raw credentials and PII, as well as to embed more programmatic control into agentic transactions.

    “Using a token is like giving someone a $5 bill. Not using tokenization is like giving them your card, your PIN number and access to your entire bank account,” Paymentology’s Joslyn said. “When agents are tightly scoped and using tokens with controls on what they can do, it becomes much easier to trust them.”
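    The scoped-token idea Joslyn describes can be sketched as a grant bound to a category, a spend limit and an expiry, checked on every agent transaction. All of the names and fields below are hypothetical, invented to illustrate the concept rather than to describe any network’s tokenization scheme.

```python
from dataclasses import dataclass
import secrets
import time

# Hypothetical sketch of a scoped agent token: the agent holds a grant with
# explicit limits instead of raw card credentials. Fields are illustrative.

@dataclass
class AgentToken:
    token: str
    merchant_category: str   # what the agent may buy
    limit_cents: int         # total it may spend under this grant
    expires_at: float        # when the grant lapses
    spent_cents: int = 0

def issue_token(category: str, limit_cents: int, ttl_seconds: int) -> AgentToken:
    return AgentToken(secrets.token_urlsafe(16), category, limit_cents,
                      time.time() + ttl_seconds)

def authorize(tok: AgentToken, category: str, amount_cents: int) -> bool:
    """Approve only in-scope, in-budget, unexpired transactions."""
    if time.time() > tok.expires_at or category != tok.merchant_category:
        return False
    if tok.spent_cents + amount_cents > tok.limit_cents:
        return False
    tok.spent_cents += amount_cents
    return True

tok = issue_token("groceries", limit_cents=5000, ttl_seconds=3600)
print(authorize(tok, "groceries", 3000))    # True: in scope, within budget
print(authorize(tok, "electronics", 1000))  # False: wrong category
print(authorize(tok, "groceries", 2500))    # False: would exceed the $50 limit
```

    This is the "$5 bill" property in code: compromising the token exposes only what the grant permits, not the underlying card or account.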

    About

    PYMNTS Intelligence is a leading global data and analytics platform that uses proprietary data and methods to provide actionable insights on what’s now and what’s next in payments, commerce and the digital economy. Its team of data scientists includes leading economists, econometricians, survey experts, financial analysts and marketing scientists with deep experience in the application of data to the issues that define the future of the digital transformation of the global economy. This multi-lingual team has conducted original data collection and analysis in more than three dozen global markets for some of the world’s leading publicly traded and privately held firms.

    We are interested in your feedback on this report. If you have questions or comments, or if you would like to subscribe to this report, please email us at feedback@pymnts.com.

    Disclaimer

    The What’s Next in Payments Series may be updated periodically. While reasonable efforts are made to keep the content accurate and up to date, PYMNTS MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, REGARDING THE CORRECTNESS, ACCURACY, COMPLETENESS, ADEQUACY, OR RELIABILITY OF OR THE USE OF OR RESULTS THAT MAY BE GENERATED FROM THE USE OF THE INFORMATION OR THAT THE CONTENT WILL SATISFY YOUR REQUIREMENTS OR EXPECTATIONS. THE CONTENT IS PROVIDED “AS IS” AND ON AN “AS AVAILABLE” BASIS. YOU EXPRESSLY AGREE THAT YOUR USE OF THE CONTENT IS AT YOUR SOLE RISK. PYMNTS SHALL HAVE NO LIABILITY FOR ANY INTERRUPTIONS IN THE CONTENT THAT IS PROVIDED AND DISCLAIMS ALL WARRANTIES WITH REGARD TO THE CONTENT, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT AND TITLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF CERTAIN WARRANTIES, AND, IN SUCH CASES, THE STATED EXCLUSIONS DO NOT APPLY. PYMNTS RESERVES THE RIGHT AND SHOULD NOT BE LIABLE SHOULD IT EXERCISE ITS RIGHT TO MODIFY, INTERRUPT, OR DISCONTINUE THE AVAILABILITY OF THE CONTENT OR ANY COMPONENT OF IT WITH OR WITHOUT NOTICE.
    PYMNTS SHALL NOT BE LIABLE FOR ANY DAMAGES WHATSOEVER, AND, IN PARTICULAR, SHALL NOT BE LIABLE FOR ANY SPECIAL, INDIRECT, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, OR DAMAGES FOR LOST PROFITS, LOSS OF REVENUE, OR LOSS OF USE, ARISING OUT OF OR RELATED TO THE CONTENT, WHETHER SUCH DAMAGES ARISE IN CONTRACT, NEGLIGENCE, TORT, UNDER STATUTE, IN EQUITY, AT LAW, OR OTHERWISE, EVEN IF PYMNTS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
    SOME JURISDICTIONS DO NOT ALLOW FOR THE LIMITATION OR EXCLUSION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES, AND IN SUCH CASES SOME OF THE ABOVE LIMITATIONS DO NOT APPLY. THE ABOVE DISCLAIMERS AND LIMITATIONS ARE PROVIDED BY PYMNTS AND ITS PARENTS, AFFILIATED AND RELATED COMPANIES, CONTRACTORS, AND SPONSORS, AND EACH OF ITS RESPECTIVE DIRECTORS, OFFICERS, MEMBERS, EMPLOYEES, AGENTS, CONTENT COMPONENT PROVIDERS, LICENSORS, AND ADVISERS.
    Components of the content original to PYMNTS, and the compilation produced by PYMNTS, are the property of PYMNTS and cannot be reproduced without its prior written permission.