In 2026, organizations in regulated engineering sectors like aviation, energy, and advanced manufacturing are no longer debating the utility of artificial intelligence (AI), but when and how it will be integrated into their toolchains. Sergey Irisov, Head of IT & Digital Transformation at ZeroAvia, emphasizes that effective governance is critical to successfully deploying AI in these environments. His recent insights outline a robust AI governance framework aimed at moving organizations from pilot projects to reliable production outcomes without accruing compliance debt.
Unlike consumer software, AI applications in highly regulated fields face unique challenges. Even if a model is technically sound, regulatory bodies can deem it unacceptable if there’s a lack of transparency regarding data provenance, decision-making processes, and the model’s evolution over time. This scrutiny can halt progress, underscoring the need for a comprehensive governance strategy that treats AI as an integral part of the product lifecycle rather than an afterthought.
Irisov identifies a common problem in stalled AI initiatives: while proof-of-concept projects may demonstrate value, organizations often struggle to scale these into controlled environments. The gap lies not in the technical capabilities of data science but in governance and architecture. A straightforward diagnostic question can reveal these gaps: “If this output influenced a decision, can we trace the entire chain of its derivation a year later?” If the answer is no, the initiative remains a demo rather than a viable production candidate.
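One way to make that diagnostic question answerable is to record every AI-influenced decision in an append-only audit log that ties the output to the exact model version and a hash of the inputs that produced it. The sketch below is illustrative only and uses hypothetical field names; the article does not prescribe a specific logging scheme.

```python
import hashlib
import json
import time


def record_decision(log: list, model_version: str, inputs: dict, output: str) -> dict:
    """Append an audit entry linking a model output to everything that produced it.

    A year later, the entry still answers: which model, acting on which inputs,
    produced the output that influenced this decision?
    """
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash rather than store raw inputs; sort_keys makes the digest deterministic.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    log.append(entry)
    return entry
```

In practice such a log would live in tamper-evident storage rather than an in-memory list, but the key idea is that traceability is captured at decision time, not reconstructed during an audit.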
The initial focus should not be on selecting the model itself but rather on establishing clear decision boundaries for automation. Irisov suggests classifying use cases into three tiers: “Assist,” “Advise,” and “Automate (bounded),” each with varying degrees of risk and control expectations. This tiered approach helps organizations clarify which decisions can be automated and ensures rigorous oversight for high-risk applications.
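The tiering itself can be encoded as policy rather than left to case-by-case judgment. A minimal sketch, assuming two hypothetical risk attributes (safety criticality and reversibility) that the article does not itself specify:

```python
from dataclasses import dataclass
from enum import Enum


class AutomationTier(Enum):
    ASSIST = "assist"                       # human does the work; AI drafts or summarizes
    ADVISE = "advise"                       # AI recommends; a human makes the decision
    AUTOMATE_BOUNDED = "automate_bounded"   # AI acts alone within pre-approved limits


@dataclass
class UseCase:
    name: str
    safety_critical: bool
    reversible: bool


def classify(use_case: UseCase) -> AutomationTier:
    # Hypothetical policy: safety-critical work never exceeds "advise", and only
    # reversible, non-critical actions qualify for bounded automation.
    if use_case.safety_critical:
        return AutomationTier.ADVISE if use_case.reversible else AutomationTier.ASSIST
    return AutomationTier.AUTOMATE_BOUNDED if use_case.reversible else AutomationTier.ADVISE
```

Writing the boundary down as code forces the organization to state its risk criteria explicitly, which is exactly the clarity the tiered approach is meant to produce.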
Central to effective AI governance is a structured stack that includes lifecycle data ownership, identity and traceability, versioned training inputs, model lifecycle management, controlled deployment processes, and continuous monitoring. When these elements are in place, audits become more straightforward and adoption smoother, since the parameters for compliance are well defined.
Irisov warns against the pitfall of isolating AI outputs within a separate platform; such practices can undermine traceability and lead to disputes over which results informed decisions. Instead, AI models should be treated as first-class lifecycle artifacts, integrated into existing product management frameworks. This integration includes linking model versions to configuration baselines and ensuring training datasets reference controlled sources.
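Treating a model as a first-class lifecycle artifact might look like the record below: the model version is pinned to a configuration baseline, and each training dataset is referenced by identifier, version, and content hash rather than copied out of its controlled source. The field names are hypothetical; real PLM/ALM schemas will differ.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DatasetRef:
    source_id: str   # identifier in the controlled data store
    version: str
    sha256: str      # content hash pinning the exact snapshot used for training


@dataclass(frozen=True)
class ModelArtifact:
    model_id: str
    model_version: str
    baseline_id: str                     # configuration baseline this model belongs to
    training_data: tuple                 # tuple of DatasetRef, immutable by design
    approved_by: str


def trace(artifact: ModelArtifact) -> list:
    """Answer the auditor's question: which controlled sources produced this model?"""
    return [f"{d.source_id}@{d.version} ({d.sha256[:8]})" for d in artifact.training_data]
```

Because the record is immutable and lives alongside other product artifacts, there is no separate AI platform to dispute; the model version that informed a decision is resolvable the same way any other configuration item is.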
As organizations increasingly adopt AI technologies, they also face new cybersecurity challenges unique to AI systems, such as data poisoning and model tampering. Irisov asserts that these challenges are not merely IT risks; they represent systemic risks that must be addressed with robust security measures. Protecting the entire data chain—from training to inference—is essential for maintaining integrity and compliance.
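Protecting the data chain starts with being able to detect that it has changed. A minimal sketch of one common technique, dataset fingerprinting with a cryptographic hash (the article endorses protecting the chain, not this specific mechanism):

```python
import hashlib
import json


def fingerprint(records: list) -> str:
    """Compute a deterministic SHA-256 digest over a training dataset snapshot.

    Records are serialized with sorted keys and hashed in sorted order, so the
    digest is independent of record ordering and dict key ordering.
    """
    h = hashlib.sha256()
    for serialized in sorted(json.dumps(r, sort_keys=True) for r in records):
        h.update(serialized.encode())
    return h.hexdigest()


def verify(records: list, expected_digest: str) -> bool:
    """Return True only if the dataset matches the digest recorded at training time."""
    return fingerprint(records) == expected_digest
```

Any insertion, removal, or modification of a record, such as a poisoning attempt, changes the digest and fails verification before the data reaches training or inference.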
To ensure the sustainability of AI capabilities, organizations should adopt a product operating model rather than treating AI as a one-off project. This approach includes appointing a product owner, establishing a roadmap, securing recurring funding, and maintaining clear agreements with engineering and compliance stakeholders. Such continuity is crucial as models evolve, data changes, and regulations adapt over time.
For organizations looking to make quick progress in their AI initiatives, Irisov recommends a focused 30-day plan. This plan emphasizes building foundational capabilities by selecting high-value, low-risk use cases, defining decision boundaries, and establishing controlled dataset processes. By prioritizing these elements, organizations can unlock multiple use cases while laying a strong governance framework.
Ultimately, Irisov concludes that the deployment of enterprise AI in regulated engineering environments is fundamentally a systems engineering challenge. When governance and architecture are prioritized from the outset, models can be more easily deployed, defended, and improved. Conversely, if these elements are retrofitted, teams may find themselves burdened by manual validations, stalled audits, and fragile adoption.
About the Author
Sergey Irisov is Head of IT & Digital Transformation at ZeroAvia, specializing in enterprise architecture and digital toolchains for regulated engineering, with a focus on PLM/ALM, digital thread governance, and audit-ready operating models across aerospace and advanced manufacturing.