
EU’s Digital Services Act Moves from Reports to Penalties as Platforms File Risk Disclosures 

December 18, 2025

Brussels is preparing to show that the European Union’s Digital Services Act is not merely a rulebook. As the European Commission moves toward what TechPolicy Press says would be its first fine against a “Very Large Online Platform” under the DSA, the region’s online-safety regime is shifting from paperwork to penalties—an evolution that could alter product road maps and compliance budgets for the world’s biggest digital intermediaries.

Most Very Large Online Platforms (VLOPs) have now publicly released their 2025 reports on assessing and mitigating “systemic risk,” disclosures mandated by Articles 34 and 35 of the DSA. The law requires the largest platforms and search engines to identify risks tied to how their products operate at scale, and to outline the controls they claim will reduce those risks.

As reviewed by TechPolicy Press, the disclosures repeatedly address risks to minors. The outlet also points to recurring attention to researcher access, user appeals (including out-of-court dispute resolution bodies), reporting channels for “trusted flaggers,” and privacy.

Beyond those procedural obligations, the reports address information integrity, election-related threats, and crisis planning—topics where regulators are likely to ask whether mitigations are working in practice rather than merely described on paper.

The filings are also imperfect instruments. Comparisons across platforms and across years can be hard to make with confidence because the companies are still revising report structures, methodologies, categorizations, and data inputs. “None of the platforms appears to be fully transparent about their data and calculations,” TechPolicy Press notes, adding that some platforms continue to redact key content-moderation figures, making public auditing impossible.

Still, the documents matter because they force companies to put sensitive risk judgments in writing. Per TechPolicy Press, “the reports remain rare documents where tech companies publicly confront the societal risks their platforms may pose.” In a supervisory model built around investigations, audits, and sanctions, what platforms say in these filings can shape the questions regulators ask next and the evidence they demand when claims look thin.

TechPolicy Press’s review of the 2025 reports also highlights what is missing. Despite rapid advances in photorealistic synthetic media, the outlet found surprisingly little new discussion of generative-AI risks “as a whole,” with Google described as an exception. Where AI did appear, it tended to be attached to specific problems, such as “nudifier” apps, scams, or synthetic impersonation of public figures.

For the digital economy, the implications extend beyond a single enforcement action. The DSA is pushing the largest intermediaries toward supervised risk governance: documentation, mitigation testing, stakeholder engagement, and the prospect of fines if regulators conclude that harms are being under-managed.

That shift will increase compliance costs and accelerate product changes around youth protections, reporting and appeals systems, and transparency reporting. It may also deepen transatlantic divergence, as firms try to reconcile EU risk obligations with U.S. political and legal debates about platform speech. Either way, the DSA’s next phase will be judged less by what platforms publish and more by what regulators prove and enforce.