EU’s AI Act Brings Antitrust Scrutiny to the Heart of Artificial Intelligence Governance

October 28, 2025

The European Union has taken a decisive step toward redefining how artificial intelligence is regulated worldwide by adopting the Artificial Intelligence Act, a sweeping framework that not only governs AI ethics and safety but also signals a bold new phase in antitrust enforcement. According to Bloomberg, this legislation represents the first comprehensive legal architecture for AI, one that extends its reach far beyond Europe’s borders and sets a new standard for global technology governance.

The AI Act, formally adopted in 2024, introduces a risk-based regulatory system that classifies AI tools by their potential societal impact. While rules prohibiting “unacceptable risk” systems took effect in early 2025, most provisions will not become fully enforceable until August 2026. Per Bloomberg, the staggered rollout reflects the EU’s attempt to balance rapid technological innovation with the need for accountability and market fairness. High-risk systems integrated into safety-critical products have an additional year to comply, with obligations taking effect in August 2027.

What makes this framework particularly consequential, according to Bloomberg, is its intersection with competition law. The EU’s approach not only targets unsafe or unethical uses of AI but also aims to prevent market concentration and dominance by the large tech firms that control foundational AI models. By layering on strict compliance duties, from transparency reporting to risk assessments, the bloc is effectively embedding antitrust considerations into technical regulation. This could force major AI developers to open their systems to scrutiny and limit practices that stifle smaller competitors.

Read more: With Congress Still MIA on AI, State Legislators Expand Their Efforts at Regulation

The legislation’s global implications are vast. Any company offering AI in the EU market must comply, regardless of its country of origin. Failure to do so could trigger substantial fines, signaling that compliance has become the price of entry to one of the world’s largest economies. According to Bloomberg, such measures are likely to influence how AI innovation is documented, patented, and monetized, with corporate legal teams already reevaluating intellectual property strategies to align with the new regulatory landscape.

Under the AI Act, beyond the outright ban on “unacceptable risk” systems, four categories define compliance exposure: minimal-risk systems such as AI-enabled video games and spam filters, limited-risk systems such as chatbots, high-risk systems that affect areas like employment or healthcare, and general-purpose or foundation models that underpin multiple applications. While minimal- and limited-risk tools face light oversight, the high-risk and general-purpose classifications carry heavy compliance burdens, including disclosure obligations and human oversight requirements.

The broader effect, per Bloomberg, is a convergence of AI policy and competition enforcement, in which transparency rules, data provenance documentation, and model accountability not only serve consumer protection goals but also function as antitrust guardrails.

Source: Bloomberg