
Age-Restriction Laws Are Proliferating; So Too Are the Difficult Tradeoffs Policymakers Face

December 23, 2025

Governments around the world are increasingly turning to online age-restriction laws as a central tool for protecting children from harmful digital content. But the growing reliance on these measures is exposing deep policy, legal, and practical challenges. While there is broad consensus that minors face real risks online, the divergence in regulatory approaches across jurisdictions highlights the difficulty of designing rules that are both effective and proportionate to the purported threat.


The debate is unfolding against the backdrop of a broader global push for online safety regulation. In the European Union, the Digital Services Act (DSA) has become a reference point for platform accountability, including obligations to mitigate systemic risks linked to content dissemination and platform design. Protecting minors is treated as a distinct regulatory objective within this framework, supplementing earlier rules under the Audiovisual Media Services Directive that restricted children’s access to harmful audiovisual content without banning it outright for adults.

According to an overview by Tech Policy Press (TPP), similar child safety initiatives have emerged in the United Kingdom, Australia, and the United States, reflecting heightened political attention to the issue.

As children’s media consumption has shifted from traditional broadcasting to social media and other digital services, policymakers have increasingly emphasized age-based access controls, per TPP. In mid-2025, the European Commission issued nonbinding guidelines under the DSA outlining measures platforms can adopt to protect minors from risks such as grooming, cyberbullying, addictive behaviors, and harmful commercial practices. These measures include design changes, content moderation tools, and age-appropriate defaults, rather than blanket bans. Nonetheless, recent legislative and political proposals have moved toward stricter interventions, including minimum age requirements and prohibitions on certain platform features deemed especially harmful to young users.

Read more: EU’s Digital Services Act Moves from Reports to Penalties as Platforms File Risk Disclosures

This shift reflects what TPP characterizes as a form of “techno-legal solutionism,” in which complex social and developmental challenges are addressed through relatively simple technical mandates, such as age verification or outright access restrictions. Australia’s recently implemented ban on social media access for users under 16 exemplifies this approach, as do proposals in parts of Europe calling for EU-wide minimum ages and expanded liability for platform executives.

Critics argue that such measures risk oversimplifying the sources of online harm while creating tensions with existing principles of intermediary liability and fundamental rights protections.

One core difficulty lies in defining the scope of regulation. Jurisdictions vary widely in which services are covered, with most proposals focusing on social media platforms while often excluding gaming, messaging, or smaller online services. These distinctions can appear arbitrary, per TPP, and may fail to account for differing risk profiles across platforms and user populations. Another challenge is ensuring that protections for minors do not impose disproportionate burdens on adults, including intrusive verification requirements that raise privacy and data protection concerns.

Age verification itself presents additional complications. Hard verification methods relying on government-issued identification or biometric data may exclude users who lack formal IDs, exacerbate digital inequality, or create new surveillance risks. Overly strict bans could also push younger users toward less regulated platforms with weaker safety controls, undermining the original policy objective.

While politically appealing, overly rigid rules risk running up against the law of unintended consequences, leaving policymakers with unresolved tradeoffs as age-restriction laws proliferate.