A PYMNTS Company

Walmart and PayPal Execs Say Prompts Could Trigger AI-Driven Coordination 

 |  December 19, 2025

As agentic AI shifts from “assistive” copilots to systems that can autonomously recommend and increasingly execute business decisions, a new antitrust risk is emerging at the level of language.

    Get the Full Story

    Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

    yesSubscribe to our daily newsletter, PYMNTS Today.

    By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

    In a Tech Policy Press Perspective, Nikhil Jain, a senior software engineer at Walmart Global Tech, and Meghana Kiran, a senior software engineer at PayPal, argue that the prompt given to an AI agent can shape market outcomes in ways regulators are not yet equipped to audit. Their warning is that compliance programs and oversight frameworks still largely evaluate code and data flows, while treating prompt engineering as an informal operational choice.

    The post situates this prompt problem in the government’s broader push to police algorithmic coordination. Jain and Kiran cite a March 2024 joint statement of interest by the Federal Trade Commission and the Department of Justice in a price-fixing case, emphasizing that companies cannot use algorithms to evade antitrust laws. That scrutiny has often been aimed at shared, third-party pricing software that can align prices across competitors without direct communication.

    According to the authors a new complexity is now entering markets: coordination can be mediated through language itself. To test that claim, they simulated a market in which prices depend on total supply and each firm chooses how much to produce. Using Google’s Gemini, two firms used LLM agents as their decision-makers to set production quantities for two products. Both agents had access to the same market history—previous prices, quantities, and trends—but no visibility into their competitor’s profit or strategy, and there was no memory exchange between them. The only variable was the prompt: one explicitly pushed aggressive competition, while the other framed the task in neutral, efficiency-oriented terms.

    Read more: PayPal Wins Dismissal of Antitrust Lawsuit Over Merchant Rules

    Under the neutral prompt, the agents quickly specialized—each taking a product—reaching a perfect market split after only two rounds. The competitive framing took five rounds to reach partial specialization.

    “The real audit shouldn’t just happen in the algorithm’s logic, but in the linguistic prompt that guides it,” the authors write. The charts included with the post visualize the gap: efficiency framing produced faster and more stable specialization, a pattern that can resemble market division even when no explicit coordination channel exists.

     The essay closes with governance proposals aimed at regulators and compliance teams. Jain and Kiran argue that efficiency wording can accelerate tacit coordination by reducing strategic uncertainty, activating learned templates from business strategy and economics where specialization is framed as profit-maximizing, and clarifying intent between agents by stripping away competitive “noise.”

    One estimate cited in the post suggests generative AI is used by more than 65% of organizations for at least one business function, and the authors warn that LLMs are increasingly being integrated into dynamic pricing, inventory optimization, and supply-chain decisions. They also point to the RealPage matter as a cautionary example, describing the Justice Department’s allegation that pricing software enabled landlords to share sensitive rent data and align prices across markets.

    To close what they describe as a governance gap, they propose prompt disclosure for firms using LLMs in pricing and other competitive decision systems, pre-deployment simulation testing to detect convergence under alternative prompt formulations, and updated FTC/DOJ guidance on algorithmic pricing to address prompt-mediated coordination, including when prompt similarity creates liability risk and where safe harbors for truly independent decision-making should apply.