EU Publishes Final AI Code of Practice to Guide Compliance for AI Companies

European Commission, EU

Highlights

The European Commission has published a voluntary Code of Practice to help AI companies comply with the EU AI Act, with enforcement starting in 2026 for new models and 2027 for existing ones.

The code addresses transparency, copyright and systemic risk management, offering signatories reduced compliance burdens and greater legal clarity.

AI firms such as OpenAI and Google are reviewing the code, which applies to any company whose AI systems are used in the EU and carries fines of up to 7% of an offending company’s global revenue for violations.

The European Commission said Thursday (July 10) that it published the final version of a voluntary framework designed to help artificial intelligence companies comply with the European Union’s AI Act.


The General-Purpose AI Code of Practice seeks to clarify the act’s legal obligations for providers of general-purpose AI models such as ChatGPT, especially models that pose systemic risks, like the potential to help bad actors develop chemical and biological weapons.

The code’s publication “marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” Henna Virkkunen, the commission’s executive vice president for tech sovereignty, security and democracy, said in a statement. The commission is the EU’s executive arm.

The code was developed by 13 independent experts after hearing from 1,000 stakeholders, including AI developers, industry organizations, academics, civil society organizations and representatives of EU member states, according to a press release. Observers from global public agencies also participated.

The EU AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe and transparent, as well as respectful of fundamental human rights.

The act classifies AI applications into four risk categories (unacceptable, high, limited and minimal) and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply with the act, and fines for violations can reach 7% of global annual revenue.


The code is voluntary, but AI model companies that sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission. The next step is for the EU’s 27 member states and the commission to endorse it.

Read also: European Commission Says It Won’t Delay Implementation of AI Act

Inside the Code of Practice

The code is structured into three core chapters: Transparency; Copyright; and Safety and Security.

The Transparency chapter includes a model documentation form, described by the commission as a “user-friendly” tool to help companies demonstrate compliance with transparency requirements.

The Copyright chapter offers “practical solutions to meet the AI Act’s obligation to put in place a policy to comply with EU copyright law.”

The Safety and Security chapter, aimed at the most advanced systems with systemic risk, outlines “concrete state-of-the-art practices for managing systemic risks.”

The drafting process began with a plenary session in September 2024 and proceeded through multiple working group meetings, virtual drafting rounds and provider workshops.

The code takes effect Aug. 2, but the commission’s AI Office will enforce the rules on new AI models after one year and on existing models after two years.

A spokesperson for OpenAI told The Wall Street Journal that the company is reviewing the code to decide whether to sign it. A Google spokesperson said the company would also review the code.
