The European Commission’s proposed reforms to the AI Act, known as the Digital Omnibus, faced significant setbacks during negotiations in Brussels on April 28. After 12 hours of discussions, the trilogue aimed at finalizing the package collapsed, leaving many in the AI governance community concerned about potential implications for their work as enforcement deadlines approach.
Critical issues, including the timeline for enforcing requirements on high-risk AI systems and the permissible use of personal data for training AI, remain unresolved. A follow-up trilogue is anticipated in two weeks with a new mandate, but doubts linger about whether European Union institutions can agree on a clear path before the enforcement of high-risk systems is set to begin on August 2, 2026.
Originally expected to be a straightforward process, the negotiations have become increasingly convoluted. This last session was the second and final one scheduled for the Omnibus package, initially introduced by the European Commission in November 2025. Although the European Parliament, the Council of the European Union, and the European Commission had aligned on key deadlines (December 2, 2027, for standalone Annex III high-risk systems, and August 2, 2028, for AI embedded in regulated products), disagreements over how the AI Act interacts with existing digital rules proved contentious.
As the enforcement deadline nears, questions have arisen about whether AI systems integrated into products already under EU sectoral safety legislation—such as medical devices, industrial machinery, toys, and connected cars—should be exempt from additional AI Act requirements. The European Parliament’s proposal for such carve-outs was met with resistance from the Council and Commission, which argue that granting these exemptions could undermine the AI Act’s regulatory framework.
This debate centers on Annex I, which lists products covered by harmonized EU safety legislation. Pivotal here is the Parliament's push to move a significant category of high-risk AI systems out of the AI Act's direct oversight and into sectoral laws such as the Machinery Regulation, Medical Device Regulation, and In-Vitro Diagnostics Regulation. Michael McNamara, the rapporteur on the AI file, acknowledged the rationale behind reducing overlapping obligations but cautioned that routing AI governance through existing sectoral legislation could prove "deregulatory rather than simplifying," a concern echoed by over 40 civil society organizations.
The Council’s reluctance to support these carve-outs, particularly in sensitive areas like financial supervision and law enforcement, underscores its commitment to maintaining the AI Act’s comprehensive framework. Any dismantling of this framework for product-embedded systems could represent a significant shift in regulatory approach. Sebastian Hallensleben, chair of CEN-CENELEC AI standards development, expressed hope that structural changes to the AI Act, especially concerning high-risk applications, would be avoided to preserve the foundational work undertaken in recent years.
While deliberations are ongoing, a potential pathway toward a simplified agreement may emerge before the next trilogue in mid-May. However, the upcoming EU presidency transition to Ireland on June 30 could further complicate negotiations, possibly steering the discussions in a new direction. Prior to the latest breakdown, there was significant consensus on the next steps for the AI Act, including the delay of high-risk system implementation until December 2, 2027.
As of now, the status quo remains in effect. The AI Act has been enacted, and the enforcement of high-risk systems is still scheduled for August 2, 2026. Many provisions not included in the Omnibus remain crucial for AI governance professionals to address. For instance, preparations must be made for compliance with Article 50, which mandates transparency requirements for new generative AI systems, including user-facing disclosures and machine-readable markings of AI-generated content, effective from August 2, 2026, for systems launched on or after that date. Ensuring that content marking and watermarking capabilities are production-ready will be essential.
As the digital landscape continues to evolve, the outcomes of these negotiations will have lasting implications for AI governance across Europe, shaping the regulatory environment and influencing standards development for years to come.




















































