
In US and Europe, Regulators Signal End to Hands-Off AI Oversight

May 10, 2026

As governments scramble to keep pace with artificial intelligence, regulators on both sides of the Atlantic are signaling a new reality for tech companies: The era of largely hands-off oversight is ending, even if the rules themselves remain unsettled.

This week brought two major developments that highlight how AI regulation is evolving in different directions in Europe and the United States. In Europe, lawmakers softened parts of the European Union's landmark AI Act while adding new consumer protections around synthetic media and explicit content. In the U.S., state attorneys general are stepping up antitrust and consumer-protection scrutiny, with a growing focus on algorithmic pricing and digital transparency.

The moves reflect a broader shift in how policymakers are thinking about AI. The debate is no longer centered only on futuristic risks. Regulators are increasingly focused on the everyday ways algorithms shape pricing, content, identity and consumer trust.

The European Parliament and EU member states reached a provisional agreement May 7 to revise implementation timelines and obligations tied to the bloc's sweeping AI law, according to Reuters.

Under the agreement, several compliance obligations for so-called "high-risk" AI systems that had been scheduled to take effect in 2026 will now be delayed until Dec. 2, 2027. Supporters of the delay argued companies and regulators needed more time to build workable compliance systems.

At the same time, the EU added tougher rules around harmful synthetic content. AI systems designed to generate non-consensual sexual imagery, sometimes referred to as "nudifier" apps, will be banned beginning Dec. 2, 2026.

The agreement also requires watermarking of AI-generated content to help consumers distinguish synthetic media from authentic photos, videos and audio. The watermarking provision is aimed in part at slowing the spread of AI-generated misinformation and scams, both of which have become growing concerns for financial institutions, payments firms and online platforms.

Read more: Google, Microsoft, xAI Agree to US Government Oversight of New AI Models

Critics say the revised framework waters down portions of the original law and creates additional carve-outs for industrial and machinery applications. But the EU's broader approach still represents one of the world's most comprehensive efforts to regulate commercial AI deployment.

In the United States, the regulatory push is coming less from Washington and more from state capitals.

A separate May 7 Reuters report highlighted how state attorneys general are expanding investigations and enforcement efforts tied to antitrust and consumer protection, particularly around algorithmic pricing systems and hidden fees.

States including California, New York and Texas have become increasingly active in examining how AI-driven pricing tools use consumer data and whether those systems could contribute to unfair pricing practices or reduced competition.

The focus extends beyond AI chatbots or generative tools. Regulators are examining the infrastructure underneath digital commerce, including dynamic pricing engines, recommendation systems and automated decision-making software.

For banks, FinTechs and payments companies, the message is becoming clearer. Regulators are starting to view algorithms less as neutral software and more as operational systems that can directly affect consumer outcomes.

That shift carries implications for everything from fraud prevention and credit underwriting to checkout experiences and personalized offers.

The challenge for companies now is balancing innovation with explainability. Businesses are still racing to embed AI into products and operations, but regulators increasingly want visibility into how those systems make decisions and whether consumers understand when algorithms are shaping the experience.

The next phase of tech regulation may not be defined by dramatic AI breakthroughs. It may instead hinge on whether companies can prove their systems are transparent, accountable and fair enough to earn public trust.