Vibe Coding Breaks Into Banking Before Regulators Can React 


Highlights

AI coding tools are letting nontechnical teams build and ship production software, reducing dependence on traditional engineering teams.

In FinTech, the bigger challenge is governing AI-generated software under strict compliance, security and regulatory requirements.

Despite risks, AI adoption is accelerating as firms like OpenAI and Anthropic push enterprise AI tools and companies race to overcome organizational barriers to adoption.

One of enterprise technology’s oldest assumptions just broke. Programming, long treated as a scarce and tightly controlled skill, is no longer reserved for engineers.


    AI systems from leading companies can now generate production-grade code, debug applications, write tests and integrate APIs with minimal human intervention. Business units that once waited months for engineering support are building solutions themselves.

    The shift is already reshaping headcount decisions at major FinTechs. When Coinbase announced a 14% headcount reduction Tuesday (May 5), CEO Brian Armstrong pointed directly at the change. “Nontechnical teams are now shipping production code,” he said, adding that the company plans to experiment with “reduced pod sizes, including ‘one person teams’ with engineers, designers and product managers all in one role.”

    For financial services, the stakes are different from those in any other industry. A broken feature in a social app frustrates users. A broken feature in a payments system can trigger fraud, compliance violations or systemic risk.

    See also: Vibe Coding Comes to Finance as CFOs Embrace Conversational AI 

    When Working Code Isn’t Enough

    The question today is no longer whether nonengineers can produce usable software. They can. And if AI-generated code works in production and keeps working, those nontechnical teams have already become, in effect, technical teams.


    Roughly 11% of live updates to Uber’s back-end systems are now written by AI agents, up from a fraction of a percent three months earlier, according to PYMNTS.

    The more relevant issue may be whether financial services firms can effectively govern that software once it enters production environments governed by banking regulations, anti-money laundering requirements, cybersecurity rules and consumer protection obligations.

    Financial services, after all, operate differently than consumer-facing platforms. Software in FinTech and banking is not merely a productivity layer; it is part of the regulated infrastructure itself. Every system touching customer funds, transaction monitoring, lending decisions, or identity verification can create legal and operational liabilities.

    A compliance analyst using an AI assistant to create an internal monitoring tool may successfully generate functioning software. The interface may work flawlessly. The outputs may even appear accurate. But readiness in FinTech involves far more than performance.

    Software-development life cycles, peer review systems, security testing and infrastructure governance exist because modern financial institutions are expected to demonstrate accountability over their technology stacks.

    Vibe coding threatens to decentralize software creation faster than governance frameworks can adapt.

    The risk is not necessarily that nontechnical employees will create bad systems. In many cases, AI-generated software may outperform hastily written human code. The larger concern is fragmentation. Hundreds of employees independently creating semi-autonomous tools could produce sprawling internal architectures with inconsistent controls, undocumented dependencies and unclear ownership structures.

    See also: Tech Giants Just Made Every Business Their Business 

    Reinvention of the Technical Team

    Despite the risks, the broader direction of travel appears difficult to reverse. Financial firms are moving from a world where relatively few people could create production systems to one where potentially everyone can.

    On Friday (May 1), Federal Reserve Vice Chair for Supervision Michelle Bowman warned that AI capabilities are advancing quickly enough to require updated supervisory approaches. PYMNTS Intelligence data provides a snapshot of banking’s embrace of AI, where, for example, 73% of top-performing credit unions are developing new payment features with external partners.

    PYMNTS also recently covered how OpenAI on Monday raised $4 billion for a venture, known as The Deployment Company, designed to get businesses to adopt its AI tools. Reportedly, partners in OpenAI’s new joint venture will get access to more than 2,000 portfolio companies and clients.

    Not to be outdone, Anthropic also on Monday launched its own new venture focused on selling AI tools to enterprise companies. Like OpenAI’s, the Anthropic initiative will help companies embed Anthropic’s Claude AI model into their businesses. And on Tuesday (May 5), Anthropic separately launched 10 new financial services-focused artificial intelligence agents.

    The reason for the enterprise and finance-function land grab by major AI labs? It could be that organizational readiness is the most cited barrier to AI adoption at large companies. More than 71% of executives at companies with at least $1 billion in yearly revenue named it as the chief limit on AI performance, according to separate research by PYMNTS Intelligence.

    For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.