
European AI Law to Prioritize Openness, Copyright Protection and Model Safety

July 10, 2025

The European Commission has introduced a new voluntary code of practice aimed at guiding companies through the complex regulatory framework of the EU’s pioneering Artificial Intelligence (AI) Act. The code is designed to support compliance with key aspects of the law, emphasizing transparency, copyright obligations, and safeguards related to safety and security, according to Reuters.


The code was developed by a panel of 13 independent experts and forms part of the EU’s broader ambition to establish a global benchmark for AI governance. While adherence to the code is not mandatory, the European Commission noted that companies choosing not to participate will not benefit from the legal clarity afforded to those who do, per Reuters.

This latest move comes ahead of the phased implementation of the EU’s AI Act, which officially came into force in June 2024. The regulation imposes tiered requirements based on the risk profile of AI systems, with the strictest rules reserved for applications deemed high-risk. General-purpose AI models, such as those powering widely used chatbots and language generators, are subject to more moderate obligations.


Beginning August 2, 2025, compliance will become mandatory for new general-purpose AI models released to the market. Existing models will have until August 2, 2027, to align with the legislation. The guidance on transparency and copyright will apply to all providers of general-purpose AI, while the directives on safety and security will specifically target developers of advanced systems such as OpenAI’s ChatGPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude.

The code’s final approval still hinges on formal endorsement by EU member states and the European Commission, a step anticipated by the end of 2025.

EU digital policy head Henna Virkkunen urged providers to take part, describing the code as a practical and collaborative tool for navigating regulatory expectations.

Source: Reuters