Bitvise
2026-05-10
Privacy & Law

EU Strikes Last-Minute Deal to Push Back AI Act Compliance Deadlines

EU lawmakers agree to extend AI Act deadlines: high-risk rules now effective Dec 2027/2028, easing compliance burden for businesses.

EU Lawmakers Agree to Delay High-Risk AI Rules

European Union negotiators reached a provisional agreement in the early hours of Thursday to extend the toughest compliance deadlines under the bloc's Artificial Intelligence Act, giving businesses additional time to adapt. The deal, struck between the European Parliament and the Council of the EU, pushes back the original August 2, 2026 deadline to December 2, 2027 for stand-alone high-risk AI systems and August 2, 2028 for AI integrated into products covered by EU safety regulations.

Source: www.computerworld.com

“Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs,” said Marilena Raouna, Cyprus’s deputy minister for European affairs, in a statement from the Council. Cyprus holds the rotating presidency of the Council, which represents the EU’s 27 member states.

Key Provisions of the Deal

The provisional agreement eliminates overlapping obligations for AI used in machinery products, which will now follow only sector-specific safety rules with equivalent health and safety protections, the European Parliament said. It also narrows the definition of “safety component” under the AI Act. AI features that merely assist users or improve performance will no longer automatically be classified as high-risk, provided their failure does not threaten health or safety.

For regulated sectors like medical devices, toys, lifts, machinery, and watercraft, negotiators agreed on a mechanism to resolve conflicts between the AI Act and existing laws, according to the Council. The deadline for member states to establish AI regulatory sandboxes has been pushed back one year to August 2, 2027. However, watermarking obligations for AI-generated content will apply earlier than the European Commission proposed: from December 2, 2026, instead of February 2, 2027, the Parliament clarified.

Relief for Mid-Size Companies

Small mid-cap companies now benefit from exemptions previously limited to small and medium-sized enterprises, the Council noted. The deal also specifies that the EU’s AI Office will oversee general-purpose AI systems centrally, while national authorities retain responsibilities in law enforcement, border management, judicial matters, and financial institutions.


“With this agreement, we show that politics can move just as quickly as technology,” said Arba Kokalari, the Parliament’s co-rapporteur for the Internal Market and Consumer Protection Committee. The breakthrough came just nine days after earlier talks collapsed without agreement.

Background

The AI Act, first proposed in 2021, regulates artificial intelligence according to risk level, with high-risk systems facing the strictest requirements. The original compliance deadline of August 2, 2026 prompted concerns from industry groups and some member states about insufficient preparation time. The provisional deal must still be formally adopted by both the Parliament and the Council before becoming law, with legislators aiming to complete the process before the August 2, 2026 deadline takes effect. Until then, the original deadline remains in force as drafted.

What This Means

Businesses now have more breathing room to prepare for high-risk AI compliance, reducing the risk of last-minute penalties. The extended timelines and narrowed scope are expected to lower administrative burdens, particularly for mid-size firms that previously faced tighter constraints. However, earlier watermarking obligations mean companies must accelerate labeling of AI-generated content. The centralized oversight by the EU AI Office could streamline enforcement, though national authorities still wield significant power in sensitive areas. Overall, the deal reflects a recognition that regulators must balance innovation with safety in the fast-evolving AI landscape.