The European Union has taken a measured step toward recalibrating its landmark artificial intelligence framework, as policymakers seek to balance regulatory ambition with economic competitiveness. In a move that underscores a broader policy shift in Brussels, the European Parliament voted last week to delay key compliance deadlines under the EU AI Act while simultaneously advancing targeted prohibitions on high-risk uses of the technology.
The decision comes amid a wider review by the European Commission of the bloc’s digital rulebook, an effort aimed at simplifying regulatory requirements and making the EU more attractive for technology investment relative to the United States and China. Against that backdrop, the Parliament’s vote reflects growing recognition that overly compressed compliance timelines could undermine both innovation and the enforceability of the rules themselves.
Under the revised timeline, companies developing “high-risk” AI systems will now have until December 2027 to comply with the AI Act’s requirements, a significant extension from earlier deadlines. Sector-specific obligations for industries already subject to stringent safety regimes, including medical devices and automotive systems, have been pushed further to August 2028.
The delay offers critical breathing room for major technology firms such as Google, Microsoft, Meta and OpenAI, all of which are integrating AI across core products and services that fall within the Act’s broad definition of high-risk use cases. These include applications affecting employment decisions, access to essential services, law enforcement, and critical infrastructure.
At the same time, The Tech Buzz reported, lawmakers made clear that flexibility on implementation timelines will not extend to applications deemed inherently harmful. In the same vote, Parliament approved an outright ban on so-called "nudify" applications, which use generative AI to create non-consensual intimate images. This marks one of the first instances of the EU explicitly prohibiting a specific category of AI-enabled products, signaling a willingness to act swiftly where risks to individuals are immediate and severe.
The dual-track approach highlights the central tension facing EU policymakers. On one hand, industry stakeholders have argued that compliance with the AI Act’s requirements—ranging from risk assessments and documentation to human oversight mechanisms—is technically complex and resource-intensive. Rushing implementation, they warn, could produce superficial compliance and unintended safety gaps.
On the other hand, public and political pressure to address visible harms from AI systems continues to mount. The rapid proliferation of deepfake technologies, particularly those targeting women and minors, has intensified calls for decisive regulatory action. By pairing deadline extensions with immediate bans, lawmakers are attempting to reconcile these competing imperatives, according to Tech Buzz.
Importantly, not all provisions are being delayed. Baseline transparency requirements, such as obligations to watermark synthetic content, remain on their original timeline, reinforcing the EU’s commitment to foundational safeguards even as it adjusts more complex compliance obligations.
The Parliament’s move also reflects concerns raised by smaller European firms, which have argued that the original schedule disproportionately favored large, well-resourced U.S. technology companies. Extending deadlines may help level the playing field, although questions remain about whether compliance costs will ultimately reinforce market concentration.
The legislative process is not yet complete. The revised timeline and related provisions will now enter negotiations with the Council of the European Union, where member states will seek to align their priorities with those of Parliament. These trilogue discussions are expected to address not only timing but also the practical integration of AI-specific rules with existing sectoral regulations.
More broadly, the outcome of those negotiations will feed into the European Commission’s ongoing effort to streamline digital regulation across the bloc. That initiative reflects a strategic pivot: maintaining the EU’s role as a global standard-setter while ensuring that its regulatory framework does not deter investment or slow the deployment of emerging technologies.
For now, the Parliament’s vote sends a clear signal. The EU is willing to adjust the pace of its most ambitious digital regulations to ensure they are workable, but it remains equally prepared to draw hard lines around uses of AI that it views as fundamentally incompatible with European values.