
Banking Leaders Face New AI Risk as Regulators Crack Down on Dark Patterns 

January 27, 2026

A loan offer pops up in a banking app the moment a customer’s balance dips. A “limited time” banner flashes next to a credit card upgrade. The button to accept is big and bright. The button to decline is smaller, grayed out or buried behind a second screen. None of this looks like fraud. But it can still push people into choices they did not truly mean to make.


That is the new risk zone for banks and fintechs using AI. As personalization gets smarter, the line between “helpful” and “manipulative” gets thinner. And regulators are paying closer attention to how digital choices are designed, not just what the fine print says.

A new post from Melento, an AI-native collaborative intelligence platform, lays out why this matters now for executives who oversee product, compliance and risk. The piece argues that consumer protection concerns are shifting “from traditional fraud and misrepresentation to the very architecture of digital choice,” especially as AI tailors pricing, offers and prompts to individual behavior. That shift is not abstract. The post points to Federal Trade Commission research finding that 67% of popular websites and apps used by consumers employ at least one dark pattern—design tactics that steer users toward spending more, sharing more data or staying subscribed longer than they intended.

In plain terms, “dark patterns” are tricks built into screens and flows. Think hidden fees that appear late in checkout. Think subscription cancellation paths that feel like a maze. Think consent prompts designed to wear people down until they click “accept.” The Melento post describes how these tactics can cause real financial harm through “hidden charges, forced purchases, or prolonged subscriptions.” For banks, the same playbook can show up in overdraft prompts, credit offers, “one-click” add-ons, or disclosures that are technically present but practically hard to find.

AI raises the stakes because it can personalize pressure. A nudge can be timed to when a customer is tired, anxious, or most likely to say yes. Pricing and offers can shift from person to person without a clear explanation. And “urgency cues” can be tested and tuned automatically to drive conversion. That can turn a basic product flow into something regulators see as unfair, even if no one intended harm.

As the post puts it: “Dark patterns are no longer just a design or ethical concern; they are illegal in many jurisdictions and subject to enforcement action.” That legal pressure is building on multiple fronts. In Europe, the Digital Services Act prohibits interfaces that impair informed choice, and policymakers are also discussing a proposed Digital Fairness Act expected in 2026 that would strengthen limits on manipulative personalization and certain AI-driven nudges. In the U.S., the post notes that California’s privacy law treats consent obtained through dark patterns as invalid, while the FTC frames manipulative design as an unfair or deceptive practice. India’s consumer authority has also pushed e-commerce platforms to self-audit and remove dark patterns, with formal notices to major firms.

So what should banking leaders expect next?

First, more scrutiny of where AI touches money decisions: personalized pricing, late-added fees, confusing opt-outs, and automated “nudges” that target vulnerable customers. Banks do not need to guess where to look. The post flags common danger zones, including drip pricing (extra costs shown late), consent flows that create “consent fatigue,” and AI-driven prompts that can “cross the line into coercive or manipulative nudges.”

Second, regulators will want proof, not promises. The Melento post emphasizes documented compliance, including audits and evidence that product teams designed for transparency and real choice.

Third, the fix is not a single policy memo. The post’s “compliance playbook” points to practical steps: disclosing fees clearly and early, making opt-in and opt-out choices easy to find, explaining why particular offers appear, running automated checks that detect risky patterns, and tightening coordination between legal, product and design teams.
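To make the last of those steps concrete, here is a minimal, hypothetical sketch of what an automated check might look like: a Python script that scans a simplified product flow for drip pricing, asymmetric accept/decline friction and preselected opt-ins. The Screen structure, the audit_flow function and the thresholds below are illustrative assumptions made for this article, not part of any tool described in the Melento post.

    # Hypothetical sketch of an automated "risky pattern" check over a
    # product flow modeled as a list of screens. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Screen:
        name: str
        fees_shown: list[str] = field(default_factory=list)   # fees disclosed on this screen
        steps_to_accept: int = 0    # clicks needed to accept from this screen
        steps_to_decline: int = 0   # clicks needed to decline from this screen
        preselected_optins: list[str] = field(default_factory=list)

    def audit_flow(screens: list[Screen]) -> list[str]:
        """Return human-readable flags for compliance review, not verdicts."""
        flags = []

        # Drip pricing: a fee that first appears after the opening screen.
        for i, screen in enumerate(screens):
            if i > 0 and screen.fees_shown:
                earlier = {f for prev in screens[:i] for f in prev.fees_shown}
                late = [f for f in screen.fees_shown if f not in earlier]
                if late:
                    flags.append(f"{screen.name}: fees disclosed late: {late}")

        # Asymmetric friction: declining takes meaningfully more effort than accepting.
        for screen in screens:
            if screen.steps_to_decline > screen.steps_to_accept + 1:
                flags.append(
                    f"{screen.name}: decline path ({screen.steps_to_decline} steps) "
                    f"much longer than accept path ({screen.steps_to_accept})"
                )

        # Preselected opt-ins undermine the "real choice" regulators look for.
        for screen in screens:
            for opt in screen.preselected_optins:
                flags.append(f"{screen.name}: opt-in '{opt}' is preselected")

        return flags

    if __name__ == "__main__":
        flow = [
            Screen("offer", steps_to_accept=1, steps_to_decline=3,
                   preselected_optins=["overdraft_protection"]),
            Screen("checkout", fees_shown=["monthly_service_fee"]),
        ]
        for flag in audit_flow(flow):
            print("REVIEW:", flag)

A real implementation would pull flow definitions from design or analytics tooling rather than hand-built objects, but even a simple rule-based pass like this produces the kind of documented, repeatable evidence regulators are starting to ask for.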

The message for banks is simple. If AI is shaping the screen, it is shaping the risk. And the safest path is to make sure customers can see, understand and control what they are agreeing to—before regulators decide the interface made the choice for them.