
EU AI Act Negotiations Stall, High-Risk System Deadline Remains Set for Aug 2, 2026

European Commission’s AI Act negotiations falter after 12 hours, as enforcement of high-risk systems remains set for August 2, 2026, raising governance concerns.

The European Commission’s proposed reforms to the AI Act, known as the Digital Omnibus, suffered a significant setback in Brussels on April 28: after 12 hours of discussion, the trilogue aimed at finalizing the package collapsed, leaving many in the AI governance community worried about the implications for their work as enforcement deadlines approach.

Critical issues, including the timeline for enforcing requirements on high-risk AI systems and the permissible use of personal data for training AI, remain unresolved. A follow-up trilogue is anticipated in two weeks with a new mandate, but doubts linger about whether European Union institutions can agree on a clear path before the enforcement of high-risk systems is set to begin on August 2, 2026.

Originally expected to be a straightforward process, the negotiations have become increasingly convoluted. This last session was the second and final one scheduled for the Omnibus package, initially introduced by the European Commission in November 2025. Although the European Parliament, the Council of the European Union, and the European Commission had aligned on key deadlines—December 2, 2027, for standalone Annex III high-risk systems, and August 2, 2028, for AI embedded in regulated products—disagreements over how the AI Act should interact with existing digital rules proved contentious.

As the enforcement deadline nears, questions have arisen about whether AI systems integrated into products already under EU sectoral safety legislation—such as medical devices, industrial machinery, toys, and connected cars—should be exempt from additional AI Act requirements. The European Parliament’s proposal for such carve-outs was met with resistance from the Council and Commission, which argue that granting these exemptions could undermine the AI Act’s regulatory framework.

This debate centers on Annex I, which lists products covered by harmonized EU safety legislation. The Parliament’s push to shift a significant category of high-risk AI systems out of the AI Act’s direct oversight and into sectoral laws such as the Machinery Regulation, Medical Device Regulation, and In-Vitro Diagnostics Regulation is pivotal. Michael McNamara, the rapporteur on the AI file, acknowledged the rationale behind reducing overlapping obligations but cautioned that routing AI governance through existing sectoral legislation could lead to “deregulatory rather than simplifying” outcomes—a concern echoed by over 40 civil society organizations.

The Council’s reluctance to support these carve-outs, particularly in sensitive areas like financial supervision and law enforcement, underscores its commitment to maintaining the AI Act’s comprehensive framework. Any dismantling of this framework for product-embedded systems could represent a significant shift in regulatory approach. Sebastian Hallensleben, chair of CEN-CENELEC AI standards development, expressed hope that structural changes to the AI Act, especially concerning high-risk applications, would be avoided to preserve the foundational work undertaken in recent years.

While deliberations are ongoing, a potential pathway toward a simplified agreement may emerge before the next trilogue in mid-May. However, the upcoming EU presidency transition to Ireland on June 30 could further complicate negotiations, possibly steering the discussions in a new direction. Prior to the latest breakdown, there was significant consensus on the next steps for the AI Act, including the delay of high-risk system implementation until December 2, 2027.

As of now, the status quo remains in effect. The AI Act has been enacted, and the enforcement of high-risk systems is still scheduled for August 2, 2026. Many provisions not included in the Omnibus remain crucial for AI governance professionals to address. For instance, preparations must be made for compliance with Article 50, which mandates transparency requirements for new generative AI systems, including user-facing disclosures and machine-readable markings of AI-generated content. These obligations take effect from August 2 for systems launched on or after that date, so ensuring that content marking and watermarking capabilities are production-ready will be essential.
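To illustrate what a “machine-readable marking” might look like in practice, the sketch below wraps generated content in a small provenance envelope. This is purely illustrative: Article 50 requires machine-readable disclosure but leaves the concrete format to technical standards, and the field names here (`provenance`, `ai_generated`, `generator`) are assumptions for the example, not drawn from the Act or any published standard.

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(content: str, generator: str) -> str:
    """Wrap generated content in a machine-readable disclosure envelope.

    Field names are illustrative assumptions; the AI Act does not
    prescribe a specific format for the marking.
    """
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,          # the machine-readable disclosure flag
            "generator": generator,        # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

def is_marked_ai_generated(payload: str) -> bool:
    """Check whether a payload carries the AI-generated provenance flag."""
    try:
        data = json.loads(payload)
    except (ValueError, TypeError):
        return False
    return bool(data.get("provenance", {}).get("ai_generated"))
```

In a real deployment, such a label would more likely be embedded via an established provenance standard (for example, C2PA-style content credentials) rather than an ad hoc JSON wrapper, but the principle—disclosure that software, not just humans, can detect—is the same.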

As the digital landscape continues to evolve, the outcomes of these negotiations will have lasting implications for AI governance across Europe, shaping the regulatory environment and influencing standards development for years to come.

Written by the AiPressa Staff.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.