Cybersecurity consultant Brian Levine, executive director of FormerGov, has voiced concerns over the European Parliament's decision to delay significant restrictions under the proposed AI Act until 2027. The postponement, he argues, leaves Chief Information Officers (CIOs) in "regulatory limbo," yet it does nothing to reduce the risks their AI systems already pose. "Enterprises still own the risk their AI systems create," he stated.
Levine underscored that the operational, legal, and reputational threats emanating from inadequately governed AI are already present. He cautioned that CIOs should not interpret the delay as a chance to relax their vigilance. “The organizations that wait for perfect regulatory clarity are the ones most likely to discover that their models have been quietly generating compliance, privacy, or safety liabilities long before any enforcement clock started ticking,” he noted.
The European Parliament has proposed that the regulation take effect on December 2, 2027, for "high-risk AI systems specifically listed in the regulation," a category that includes biometric systems and applications in critical sectors such as education, employment, law enforcement, and border management. AI systems already covered by existing EU sectoral legislation on safety and market surveillance would have until August 2, 2028, to comply. Members of the Parliament also favor giving providers until November 2, 2026, to meet new rules on watermarking AI-generated content, such as audio, images, video, or text, so that its origin can be identified.
These timelines matter because the European Union is attempting to establish a comprehensive framework governing the use of artificial intelligence. While the delay gives organizations some breathing room to prepare for compliance, a pressing question remains: how will companies manage the risks associated with AI in the meantime?
As businesses increasingly integrate AI technologies into their operations, liabilities tied to ethical missteps or safety failures pose a significant challenge. Navigating this terrain requires a proactive approach to risk management rather than a wait-and-see strategy, since the consequences of poorly governed AI extend beyond compliance to company reputation and consumer trust.
The AI Act represents a pivotal moment in the regulatory journey of artificial intelligence. By delineating high-risk applications and setting compliance timelines, the EU aims to create a balanced framework that safeguards public interests while fostering innovation. The implications of these regulations are far-reaching, affecting sectors from health care to education, where reliance on AI systems is accelerating.
As organizations grapple with the implications of these delayed regulations, the broader industry response will be critical to shaping how AI is developed and deployed. Companies that prioritize ethical AI deployment and compliance with forthcoming regulations may find themselves at a competitive advantage in an increasingly cautious market.
While the extended timelines offer some latitude for adjustment, they also underscore the difficulty of governing rapidly evolving technologies. As AI permeates more aspects of life and business, robust governance frameworks become essential. Organizations must not only prepare for compliance but also cultivate a culture of responsibility that addresses the ethical implications of AI usage.
In conclusion, as the deadline for the AI Act approaches, the onus will be on enterprises to mitigate risks and ensure that their AI systems adhere to evolving standards. The landscape of AI regulation is shifting, and those who adapt swiftly may not only comply with the law but also lead the way in ethical AI practices.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health