
European Commission Begins Work on Code of Practice for Identifying and Detecting AI Content

January 7, 2026

The European Commission has begun work on a voluntary code of practice for transparency on AI-generated content. A first draft of the code, developed by independent experts in consultation with stakeholders, was published on December 17, and work on the code is expected to be completed by mid-2026.


    While not binding, the code is intended to help AI developers and deployers comply with their respective obligations under Article 50 of the AI Act. The article, which takes effect August 2, 2026, requires labeling and disclosure when AI is used to generate or manipulate content.

    Work on the code is being conducted in two working groups, according to a strategy outline published by the Commission: one for providers of general-purpose AI systems and one for deployers of generative AI systems.

    The providers’ working group is focused on developing machine-readable means of identifying and detecting the outputs of AI systems, including text, images, audio and video, per the outline. Article 50 stipulates that technical solutions used must be effective, interoperable, robust, and reliable as far as is technically feasible.

    The deployer group is charged with developing standardized protocols for disclosing when content has been artificially generated or manipulated so as to constitute a deep fake, including “images, audio, or video which resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.”

    The Code of Practice will not include specific technical mandates. According to the draft, provider signatories “will ensure that AI-generated or manipulated content is marked with an imperceptible watermark. This watermark will be directly interwoven within the content in a manner that is difficult for it to be separated from the content, and that withstands typical processing steps that may be applied to the content.”
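The robustness requirement quoted above can be made concrete with a toy example. The sketch below is purely illustrative and is not drawn from the draft: it embeds a mark into the least significant bits of raw pixel bytes, so the mark is interwoven with the content itself rather than attached as metadata. Note that LSB embedding does not survive typical processing such as compression; production watermarks use more robust schemes (e.g., frequency-domain embedding). All names here are hypothetical.

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Interweave each bit of `mark` into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("content too small to carry the watermark")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite LSB with watermark bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes of watermark back out of the pixel LSBs."""
    mark = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        mark.append(value)
    return bytes(mark)

image = bytearray(range(256)) * 4   # stand-in for raw pixel data
marked = embed_watermark(image, b"AI")
assert extract_watermark(marked, 2) == b"AI"
```

Because only the lowest bit of each byte changes, the marked content is visually indistinguishable from the original, which is the "imperceptible" property the draft describes.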


    The watermark can be applied at any point in the process, including during model training, during inference, or to the output, but should be implemented in the “best possible technical and economically viable way.”

    Signatories must also provide, free of charge, an API, user interface, or standalone AI detector to “enable users and other interested parties to verify with confidence scores whether content has been generated or manipulated by their AI system or model.”
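A detector of the kind described might expose an interface along the following lines. This is a hypothetical sketch, not the draft's specification: the result shape (a match flag, a confidence score, and optional provenance) mirrors the requirements quoted above, while the detection logic is a placeholder that simply reads back the LSB-style mark from the earlier illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    generated_by_our_model: bool
    confidence: float              # 0.0 to 1.0, per the "confidence scores" requirement
    provenance: dict = field(default_factory=dict)  # provenance info, if the marker carries it

def detect(content: bytes, expected_mark: bytes = b"AI") -> DetectionResult:
    # Placeholder check: recover the LSB-embedded mark and compare it.
    recovered = bytearray()
    for byte_idx in range(len(expected_mark)):
        value = 0
        for i in range(8):
            value |= (content[byte_idx * 8 + i] & 1) << i
        recovered.append(value)
    matched = bytes(recovered) == expected_mark
    return DetectionResult(
        generated_by_our_model=matched,
        confidence=0.99 if matched else 0.01,  # a real detector would score statistically
        provenance={"model": "example-model-v1"} if matched else {},
    )
```

A real detector would return a graded statistical confidence rather than two fixed values, but the shape of the answer, score plus provenance, is what the draft asks signatories to expose free of charge.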

    For markers that include provenance information on content, the user interface should also “disclose a complete set of the provenance information.”

    As for deployers of gen-AI systems, signatories should apply “consistent disclosure of origin” of fully AI-generated content, and use “a common taxonomy and icon as specified in the following measures.”

    For AI-assisted content, disclosures should identify which portions or elements of the content were manipulated or generated by AI.

    With respect to deepfakes specifically, signatories “will disclose the deep fake content in a clear and distinguishable manner at the latest at the time of the first exposure,” including displaying the icon “in a non-intrusive way consistently throughout” real-time video. For non-real-time video, signatories have a choice of disclosure methods: a disclaimer at the beginning of the exposure; the icon placed consistently throughout the exposure in an appropriate fixed place; or a disclaimer in the closing credits, but only when accompanied by one of the other measures.
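The combination rule for non-real-time video, that a closing-credits disclaimer counts only alongside another measure, can be expressed as a simple validity check. The names below are illustrative, not taken from the Code.

```python
from enum import Enum, auto

class Disclosure(Enum):
    OPENING_DISCLAIMER = auto()   # disclaimer at the beginning of the exposure
    PERSISTENT_ICON = auto()      # icon shown consistently in a fixed place
    CLOSING_CREDITS = auto()      # disclaimer in the closing credits

def is_compliant(measures: set) -> bool:
    """Closing credits alone are insufficient; any other listed measure suffices."""
    if not measures:
        return False
    if measures == {Disclosure.CLOSING_CREDITS}:
        return False
    return True

assert is_compliant({Disclosure.OPENING_DISCLAIMER})
assert not is_compliant({Disclosure.CLOSING_CREDITS})
assert is_compliant({Disclosure.CLOSING_CREDITS, Disclosure.PERSISTENT_ICON})
```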

    Work on a second draft will begin on January 12, with a target delivery date in March.