PYMNTS.com

#pymntsTop5

The Web Is Gaslighting AI Agents and Nobody Can Tell 

Read This

OCC Enters the Interchange Fight and Raises the Stakes

Read This

Payments Modernization Is Insurance’s Next Big Margin Engine

Read This

How Visa Is Rewiring Bank Infrastructure for the AI Era

Read This

The Web Is Gaslighting AI Agents and Nobody Can Tell 

An artificial intelligence (AI) agent finds the best price on a product and completes the purchase. It browses, selects and checks out without a human visiting a single page. Researchers say the listing it processed could have been seeded with hidden instructions, ones indistinguishable from legitimate content.

    Get the Full Story

    Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

    yesSubscribe to our daily newsletter, PYMNTS Today.

    By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

    Google DeepMind has published research on a new class of threat to autonomous AI agents. Researchers called them “AI Agent Traps,” instructions hidden inside ordinary web pages that agents read as commands.

    The research covered six distinct attack categories and applies to every major model and agent architecture. Enterprises are deploying agents across procurement, finance and commerce with no standardized defenses in place.

    The Web Is No Longer Neutral Input

    The core vulnerability is architectural. It starts with a simple difference in how humans and machines read a webpage.

    When a person visits a product listing, they see the price and the description. An AI agent visiting the same page reads something different. It processes the underlying code, the hidden metadata and the scripts running in the background. Those layers are never visible on screen. Attackers are now writing to them specifically to reach agents.

    DeepMind’s first attack class is content injection. Malicious instructions are buried in the page’s code or image files, invisible to any human reviewer. The agent reads them as part of the page and acts on them. The second class is semantic manipulation. Instead of hiding commands in code, an attacker crafts product descriptions or vendor profiles worded to steer an agent’s conclusions. It exploits the same tendency to over-weight authoritative-sounding language that affects human judgment.

    Advertisement: Scroll to Continue

    Palo Alto Networks’ threat research team has documented both attack types across the web. Malicious websites are already deploying hidden instructions at scale. They use techniques that fragment or encode commands to pass automated security checks. The commands remain readable to the agent. The attack surface grows every time an agent connects to a new data source.

    From Bad Decisions to Manipulated Decisions

    The consumer purchase scenario scales directly into enterprise operations.

    A procurement agent pulling vendor pricing from a compromised supplier site may route an order to a fraudulent vendor. It does so without producing a visible error. The agent is not malfunctioning. It is following instructions it cannot identify as malicious. A customer service agent retrieving product information from a compromised page may return fabricated details. The agent then logs the interaction as resolved. In both cases, the workflow completes normally, and nothing is flagged.

    The DeepMind paper documents a case in which a single manipulated email caused an agent in Microsoft’s 365 Copilot to bypass its security classifiers. The agent then exposed its full privileged context. It handed over data it was specifically configured to protect.

    According to Anthropic, every webpage a browser agent visits is a potential attack vector. The company said that a 1% attack success rate represents significant risk at enterprise scale. Anthropic added that prompt injection is far from a solved problem, particularly as agents take more real-world actions.

    New Security Layer for Agent-Driven Workflows

    The reason this is hard to fix is the same reason agents are useful in the first place.

    AI agents are designed to ingest content from the web and act on it. They do not arrive at a page with skepticism. They read everything as input. An instruction buried in a product listing looks the same to an agent as the price and the shipping date. There is no built-in mechanism to tell the difference.

    The DeepMind researchers identified detection, attribution and adaptation as the three requirements for an effective defense. Detecting hidden instructions requires pre-ingestion scanners. Tracing which domain introduced a manipulation requires attribution infrastructure. Keeping pace with new attack techniques requires defenses that update continuously.

    The DeepMind paper called for new web standards that flag content intended for AI consumption and domain reputation systems that score site reliability for agents. It also called for adversarial training built into model development from the start. The researchers noted that many of the six attack categories currently lack standardized benchmarks. Most enterprises have no way to test whether their deployed agents would withstand them.

    For all PYMNTS AI coverage, subscribe to the daily AI Newsletter

    OCC Enters the Interchange Fight and Raises the Stakes

    The Office of the Comptroller of the Currency is moving on two fronts to reshape how interchange is governed in the United States, pushing federal oversight into a fight that started in Illinois but won’t stay there.

      Get the Full Story

      Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

      yesSubscribe to our daily newsletter, PYMNTS Today.

      By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

      In March, the OCC filed an amicus brief in the Seventh Circuit appeal of Illinois’ Interchange Fee Prohibition Act, siding with bank plaintiffs and urging the court to reverse key portions of the lower court’s ruling. Last week, it advanced a regulatory track, submitting a rulemaking item on noninterest charges and fees for federal review.  The details of that rule have not yet been published. The Illinois law is slated to take effect July 1.

      Federal-State Fault Line

      The Illinois statute seeks to prohibit the collection of interchange on portions of transactions tied to taxes and gratuities, which would require merchants and processors to isolate those elements at the transaction level. The OCC’s position frames that requirement as an encroachment on powers granted to national banks under federal law.

      Because the OCC supervises national banks, its stance carries direct operational implications for issuers. The agency’s court filing emphasizes that interchange is embedded in the structure that supports lending, deposit services and transaction processing. The issue before the court, therefore, extends beyond fee mechanics and into whether a state can compel changes to how transactions are structured and priced.

      What Illinois Is Trying to Change

      The Illinois law introduces a targeted adjustment with broader consequences. By excluding taxes and tips from interchange calculations, it alters how total transaction value is defined for fee purposes. That change requires merchants to identify and transmit those components separately, and it obliges payment systems to recognize and process them as distinct elements.

      The OCC’s brief states that the Illinois statute represents “an improper and undeniable state interference with federally authorized banking powers.” It further warns that compliance would require significant operational changes across institutions, including modifications to systems that currently treat transactions as unified amounts. Elsewhere, and in terms of the economics, last week the St. Louis Fed estimated that U.S. banks collected $66 billion in interchange, or so-called “swipe,” fees in 2025.  That latest tally indicates a boost from $64 billion in 2024 and the $52 billion recorded in 2021.

      Advertisement: Scroll to Continue

      The OCC argues that focusing on which party formally sets interchange overlooks the broader relationship between issuing banks and payment networks. In its view, reliance on third-party networks does not diminish the federal authority that underpins the provision of card services.

      “The power to charge and receive a fee, not just determine its amount, are express powers of national banks that IFPA seeks to improperly confine,” the brief asserts.

      National or State-by-State Rules

      The case carries implications that extend beyond Illinois. Legislative proposals in other states such as Colorado and Delaware indicate that similar approaches to interchange could follow, raising the prospect of divergent rules across jurisdictions.

      If the Illinois framework is upheld, financial institutions would need to adapt their systems to accommodate state specific fee calculations. That would involve changes to transaction routing, data segmentation and reconciliation processes, all of which currently rely on uniform treatment across markets.

      A different outcome would find interchange governed under federal standards and network rules. In that environment, institutions would continue to process transactions with consistent fee structures regardless of location, maintaining interoperability across the payments system. The OCC’s filings contend that such uniformity supports efficiency and stability in the delivery of banking services.

      The timeline remains compressed. The appellate court is expected to take up the case before the law’s scheduled effective July 1, creating a near term decision point for the industry.

       

      Payments Modernization Is Insurance’s Next Big Margin Engine

      Watch more: Need to Know With One Inc’s Ian Drysdale

        Get the Full Story

        Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

        yesSubscribe to our daily newsletter, PYMNTS Today.

        By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

        Insurance carriers are under pressure and operating on the edge. With margins often hovering between 1% and 2%, even small inefficiencies can erase profitability. One of the biggest and most overlooked drivers of margin compression sits inside the payments function.

        According to One Inc CEO Ian Drysdale, payments modernization is no longer a technology upgrade. It is a financial strategy.

        In a conversation with PYMNTS CEO Karen Webster, Drysdale framed the issue clearly. Insurers are not losing margin only in underwriting. They are also losing it through outdated, check-based payment systems embedded in claims operations.

        The Hidden Cost Center Dragging Down Margins

        Legacy payment workflows were built for a different era, one with lower claims volume, less fraud and fewer expectations around speed. Today, they are creating operational drag at exactly the wrong time.

        Paper checks remain embedded across claims, commissions, refunds and subrogation payments. Each check carries cost, not just the $4 to $20 issuance expense cited by financial institutions, but also the downstream burden of tracking, reissuing, reconciling and managing exceptions.

        Advertisement: Scroll to Continue

        Those costs compound quickly.

        Payments often involve multiple stakeholders including policyholders, contractors, medical providers and lenders. Each requires validation. The result is friction-heavy processes that can stretch payout timelines to four to eight weeks.

        That delay is not just an operational issue. It directly impacts customer satisfaction, increases claims severity and ultimately erodes margins.

        At the same time, fraud risk is escalating. Checks can be intercepted, altered or misdirected, exposing insurers to losses that are both preventable and growing.

        Why Payments Modernization Is a Margin Strategy

        The shift to digital payments changes the economics fundamentally.

        Modern payout platforms validate recipients up front, reducing fraud exposure while enabling near-instant disbursements once claims are approved. More importantly, they remove the manual processes and administrative overhead tied to paper.

        Drysdale said the impact is measurable.

        He points to savings that can reach tens of millions of dollars annually for large carriers. In some cases, digital methods such as virtual cards can eliminate payout costs entirely for insurers, while vendors accept small fees in exchange for faster access to funds.

        This is not incremental improvement. It is structural margin expansion.

        “It’s a margin recovery strategy,” Drysdale said, noting that digitizing payments alone can add one to two percentage points back to the bottom line. That is a meaningful shift in an industry where that margin defines viability.

        From Operational Fix to Strategic Priority

        What is changing now, Drysdale said, is how insurers think about payments.

        Historically treated as a back-office function, payments are being reevaluated as a core lever of financial performance and competitive differentiation. Faster payouts improve customer experience. Lower costs improve profitability. Reduced fraud improves resilience.

        Adoption curves reflect that shift. Carriers that begin with low digital penetration often move quickly to a majority of payments processed electronically once modern infrastructure is in place.

        Artificial intelligence is also beginning to play a role, not as a sweeping transformation, but as a targeted tool for improving reporting, reconciliation and operational visibility. Adoption remains cautious, given regulatory and privacy concerns.

        Modernization Under Pressure

        The urgency is being driven by external forces insurers cannot control.

        Catastrophe losses are increasing in frequency and severity, putting pressure on claims volumes and costs. At the same time, policyholders expect payouts to move as quickly as any other digital transaction.

        Legacy systems were not built for either reality.

        That mismatch is forcing a broader rethink. If insurers cannot fully control losses or pricing, they must control costs. Payments are one of the few areas where immediate gains are achievable.

        The Bottom Line

        Payments modernization is no longer optional, and it is no longer just about efficiency.

        It is about margin expansion.

        Insurers that modernize can reduce cost, accelerate claims, limit fraud and improve customer outcomes, while reclaiming critical basis points of profitability.

        Those that do not risk watching their margins erode further under the weight of systems built for a different time.

        As Drysdale put it, the industry’s future will hinge on a single capability: resilience.

        How Visa Is Rewiring Bank Infrastructure for the AI Era

        Watch more: The Edit With Visa’s Kathleen Pierce-Gilmore

          Get the Full Story

          Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

          yesSubscribe to our daily newsletter, PYMNTS Today.

          By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

          Banks’ legacy core systems were built to prioritize stability, accuracy and scale, and they still deliver on those goals. But as banks attempt to modernize, digitize and incorporate artificial intelligence (AI) into everyday operations, those same systems are increasingly limiting what institutions can do and how fast they can do it.

          “Legacy core is an impediment to innovation,” PYMNTS CEO Karen Webster noted during a recent conversation with Kathleen Pierce-Gilmore, senior vice president and global head of issuing solutions at Visa, as part of the “The Edit” series. What has changed, Pierce-Gilmore said, is how visible and costly that constraint has become.

          AI Turns Infrastructure Into a Business and Risk Issue

          Artificial intelligence (AI) has raised the stakes for infrastructure decisions. AI depends on timely, governed access to data — something with which legacy, batch-based environments struggle.

          “If you don’t have well understood, well-managed, well-governed data, it’s going to be really hard to use AI,” Pierce-Gilmore said.

          In many banks, data is fragmented across systems, limiting real-time use. Pierce-Gilmore referred to those silos as “data prisons,” where information is trapped inside infrastructure and cannot easily be mobilized for analytics, personalization or compliance.

          Advertisement: Scroll to Continue

          That limitation affects far more than customer experience. It also shapes how banks evidence controls, monitor activity and respond to regulatory inquiries in an AI-driven environment.

          Regulatory expectations are tightening just as banks attempt to move faster. AI adds new complexity to long-standing requirements around data usage, transparency and governance.

          Pierce-Gilmore noted that regulators themselves are still working through how to oversee emerging technologies. That uncertainty makes infrastructure choices more consequential, not less. Banks must be able to explain how data flows through systems, how decisions are made and how controls are enforced, even as AI models evolve.

          The result is a compressed timeline. Banks are no longer debating whether modernization is necessary. They are deciding how to modernize without increasing risk or creating new forms of technical debt.

          “I do think there is a greater sense of urgency than ever,” Pierce-Gilmore said.

          Unlocking Data Enables Real-Time, Compliant Engagement

          Unlocking data embedded inside infrastructure allows banks to move from static processes to real-time engagement. When events can be consumed as they occur, banks can respond with timely alerts, personalized offers and more accurate risk management.

          Pierce-Gilmore offered a straightforward example: a cardholder pays down a balance and immediately sees available credit update, receives a relevant alert and is presented with a promotion tied to that action.

          “You can do that in a much more seamless and relevant way if you’ve got the data at the time that it’s happening,” she said.

          Batch processing makes those experiences difficult. Real-time event streaming makes them possible, and also auditable.

          Future-Proofing Is Now a Requirement

          Infrastructure decisions made today must accommodate technologies and regulations that do not yet exist. Pierce-Gilmore stressed that banks can no longer modernize with a fixed end state in mind.

          “We just don’t know today how data is going to be used or how additional layers of technology are going to be built out,” she said.

          That uncertainty places a premium on modularity and configurability. Systems must be flexible enough to adapt as AI use cases expand and as regulators clarify expectations, without locking banks into rigid architectures that become the next generation of legacy.

          Speeding Innovation’s Time to Market

          One of the most acute challenges banks face is time. Many institutions want to launch new products or experiences but face technology queues that stretch multiple years.

          Pierce-Gilmore described scenarios where issuers are “desperate to get a new product to market” but are blocked by core system backlogs. Waiting for full modernization is often not an option.

          Visa’s approach allows banks to move forward without ripping out existing infrastructure. Capabilities can be deployed on a standalone basis or layered alongside existing systems, enabling faster launches while deeper transformation continues in parallel.

          Why Pismo Matters More Than Ever

          Pismo sits at the center of that strategy. Its real-time event streaming and flexible issuing capabilities allow banks to modernize incrementally, aligning infrastructure changes with immediate business priorities.

          Rather than forcing banks into an all-or-nothing transformation, Pismo supports targeted progress—whether launching a new digital product, improving servicing or enabling real-time engagement.

          Crucially, Pismo also supports operational readiness. Real-time infrastructure only delivers value if banks can consume and act on events across servicing, risk and compliance functions.

          Visa’s Modular, Ecosystem-Scale Approach

          Visa’s infrastructure strategy reflects its scale across tens of thousands of issuers globally. Rather than selling isolated features, the focus is on modular components that can be combined to deliver measurable outcomes.

          “Our clients don’t want to buy features and functionality,” Pierce-Gilmore said. “They want to buy outcomes.”

          That approach allows Visa to act as what Pierce-Gilmore termed a force multiplier, applying lessons learned across markets and regulatory regimes to help banks modernize more efficiently and responsibly.

          A Clear Direction of Travel

          Legacy cores may still keep banks running, but the ability to unlock data, operate in real time and adapt to AI and regulatory change will determine which institutions can compete—and which remain constrained by the systems they rely on most.

          Pierce-Gilmore weighed in confidence. “I would put my whole life savings on the modernization of infrastructure,” she said.

          PYMNTS CEO Karen Webster is one of the world’s leading experts in payments innovation and the digital economy, advising multinational companies and sitting on boards of emerging AI, healthtech and real-time payments firms, including a non-executive director on the Sezzle board, a publicly traded BNPL provider. She founded PYMNTS.com in 2009, a top media platform covering innovation in payments, commerce and the digital economy. Webster is also the author of the NEXT newsletter and a co-founder of Market Platform Dynamics, specializing in driving and monetizing innovation across industries.