Early Thursday morning (7 May), EU co-legislators agreed to delay and amend the AI Act, the world’s first attempt at regulating artificial intelligence through a risk-based tier system. The deal is part of the broader digital rules simplification package proposed by the Commission in November of last year.
This comes after negotiations between the Parliament and Council collapsed late last month over internal disagreement within the German government. Berlin had been pushing to exempt industrial AI applications (e.g., machinery) from the law’s scope, arguing it created a double regulatory burden on top of existing sectoral rules. Once member states’ ambassadors aligned behind the German position on Wednesday morning (6 May), the ground was laid for a deal with the Parliament in overnight negotiations.
The exemption, a long-standing industry demand, means that companies will now only need to comply with AI requirements under sectoral legislation, rather than under both that legislation and the AI Act. As the key safeguard the Parliament secured in exchange, the Commission will issue delegated acts under the machinery regulation to set equivalent health and safety standards. Other sectors floated for a similar exemption, such as medical devices, did not make the cut.
German Christian-conservative parliamentarian Axel Voss called it a victory for “common sense,” explicitly thanking Chancellor Friedrich Merz. Civil society groups disagreed, warning in a last-minute letter that the tweak would exclude “a wide range of industrial and consumer AI systems” from core safeguards.
Less controversial but equally significant: key deadlines are being pushed back. Obligations on high-risk AI systems – covering biometrics, critical infrastructure, education, employment, law enforcement and border management – will now apply from December 2027, instead of August 2026. AI systems used as safety components under sectoral legislation get even longer, until August 2028. Watermarking obligations, which require AI-generated content such as images, audio and video to carry detectable markers identifying them as machine-made, are also delayed to December 2026.
A ban on so-called nudifier apps was added to the package following the controversy around Elon Musk-owned xAI’s Grok. The new rules prohibit placing AI systems on the EU market that generate sexualised imagery of identifiable people without consent – whether images, video or audio. Child sexual abuse material generated by AI is also explicitly banned. Companies have until 2 December 2026 to comply.
The deal also narrows the definition of “safety component,” meaning AI that merely assists users or optimises performance won’t automatically trigger high-risk obligations. SME exemptions are extended to small mid-cap enterprises. Enforcement of the rules on certain general-purpose AI systems is centralised under the EU’s AI Office to prevent fragmentation across 27 national jurisdictions.
Everyone – apart from civil society – is happy
All sides claimed victory. The Cypriot Council presidency said the deal “reduces recurring administrative costs” and ensures “legal certainty.” Commission President von der Leyen praised its “innovation-friendly environment.” Consumer groups and digital rights organisations were less enthusiastic, having campaigned against the machinery exemption until the final hours.
The provisional agreement still needs formal adoption by Parliament and Council, expected before 2 August 2026. Attention now turns to the tech sovereignty package, currently scheduled for 27 May. There was plenty of movement on that front this week too, with new details emerging on the French-German sovereignty definition, open source provisions, and inclusion of the semiconductor ecosystem in the revised Chips Act.

