
European Commission Provides Guidelines for Complying With AI Act

July 20, 2025

With the August 2 date on which the AI Act's rules for general-purpose AI (GPAI) models take effect rapidly approaching, the European Commission has been filling in some of the blanks. Last week, the Commission published guidelines for signatories to the AI Code of Practice and on the scope of obligations for developers and deployers of GPAI models.


    On the Code of Practice, the guidelines clarify that signing the voluntary code will not provide a “presumption of conformity with the AI Act,” but that the AI Office may “take a provider’s adherence to the Code of Practice into account when monitoring its” compliance with the Act and may “favourably take into account commitments made in the Code of Practice,” when assessing any fines.

    Providers that do not sign the code, as Meta said last week it would not, must still comply with the law, but they can demonstrate compliance through methods other than the code’s recommendations.

    Notably, specific guidelines on how to sign on to the code suggest that signatories will be able to opt out of specific sections, but that, “Any opt-out from chapters of the code of practice results in losing the benefits of facilitating the demonstration of compliance in that respect.”

    AI developers have been asking the Commission for the option to pick and choose parts of the code. But many in the creative industries have been wary of such a policy, fearing that many signatories will choose to opt out of the chapter on copyright. Still missing from the guidelines is the template being developed by the AI Office specifying precisely how GPAI providers must disclose the “sufficiently detailed summary” of the data used to train GPAI models, as required by the law.

    The guidelines also hedge on when downstream fine-tuning of a model results in a new model that must comply with all the obligations of model providers.


    “General-purpose AI models may be further modified or fine-tuned into new models (recital 97 AI Act). Accordingly, downstream entities that fine-tune or otherwise modify an existing general-purpose AI model may become providers of new models,” the guidelines say. “The specific circumstances in which a downstream entity becomes a provider of a new model is a difficult question with potentially large economic implications, since many organisations and individuals fine-tune or otherwise modify general-purpose AI models developed by another entity.”

    The guidelines on the scope of obligations for GPAI providers focus on the computational resources used in training to distinguish among model types.

    The guidelines define a general-purpose model as one “trained on a broad range of natural language data (i.e. text) curated and scraped from the internet and other sources… using 10^24” floating-point operations (FLOP), which “should be capable of competently performing a wide range of distinct tasks.”

    Models produced with the same level of computational resources but trained for specific tasks, such as transcribing speech to text or generating speech from text, are not considered GPAIs because they are insufficiently general in their applications.

    The guidelines on the scope of obligations also fill in some of the missing details from the Code of Practice guidelines. In particular, they provide greater clarity on when downstream modifications of a GPAI model result in a new model.

    “The Commission deems that it is not necessary for every modification of a general purpose AI model to lead to the downstream modifier being considered the provider of the modified general-purpose AI model,” the guidelines state. “Instead, the Commission considers a downstream modifier to become the provider of the modified general-purpose AI model only if the modification leads to a significant change in the model’s generality, capabilities, or systemic risk.”

    While the obligations on GPAI providers legally go into effect on August 2, the Commission will not begin any enforcement action until August 2, 2026.