Custom-built AI applications are increasingly shaping the landscape of cybersecurity, with Gartner projecting that by 2028, half of enterprise cybersecurity incident response initiatives will focus on these systems. This shift comes as organizations adopt AI-driven software across various business processes and customer-facing services, often deploying applications before completing necessary testing and security reviews.
Christopher Mixter, a VP Analyst at Gartner, emphasized the urgency of this transition. “AI is evolving quickly, yet many tools—especially custom-built AI applications—are being deployed before they’re fully tested,” he noted. The complexity and dynamism of these systems pose significant security challenges over time. Currently, most security teams lack established processes for handling AI-related incidents, which can prolong resolution and increase the effort required.
Traditionally, incident response teams have focused on detection, containment, eradication, and recovery. However, AI systems introduce additional variables, such as model behavior, data handling, and integration with other services, complicating investigations. An issue may be a security breach, a software defect, a data quality problem, or a combination of these, making it difficult for teams to pinpoint the cause.
In response to these challenges, Gartner anticipates broader adoption of AI security platforms. The firm predicts that over 50% of enterprises will utilize these platforms by 2028 to manage both third-party AI services and custom-built applications. Organizations are increasingly seeking a unified management layer to oversee multiple AI tools, as concerns about prompt injection attacks, data misuse, and inconsistent controls among departments grow. Governance demands are also escalating as business units rapidly implement new AI features.
Security leaders are advised to evaluate whether their tools cover both in-house and external AI usage, ensuring visibility into AI activities and enforcing policies across custom systems and vendor-provided services. As organizations navigate this complex landscape, the need for effective governance and risk management becomes paramount.
Gartner’s findings also highlight the rising pressure of compliance, forecasting that by the end of 2027, 75% of regulated organizations will face fines exceeding 5% of global revenue, stemming from inadequate, largely manual AI compliance processes. This prediction reflects a regulatory environment that is continuously evolving across regions. Although requirements differ, Gartner expects a convergence around structured AI risk management, intensifying the pressure on organizations that still depend on spreadsheets and ad hoc evidence collection for compliance reporting.
The introduction of AI safety regulations adds another layer of complexity for risk and compliance teams, which are already managing security, privacy, and cyber risk requirements. AI-specific regulations extend the focus to include model risk, data provenance, and ongoing monitoring, complicating the compliance landscape.
Furthermore, through 2030, Gartner anticipates that 33% of IT work will be dedicated to remediating AI data debt—weaknesses in the datasets that organizations rely on. This term encompasses unstructured or poorly secured information across file shares, SaaS platforms, and legacy systems. As AI features increasingly allow access to internal data stores, gaps in data classification and access control become more apparent. Data loss prevention programs are broadening their scope to encompass AI-driven data flows, including monitoring requests generated by generative AI tools.
As organizations grapple with these challenges, the focus on data sovereignty within cloud security is expected to increase. By 2027, Gartner forecasts that 30% of organizations will demand comprehensive sovereignty over their cloud security controls, driven by geopolitical uncertainties. This focus includes assessing where data is stored, who can access it, and how cloud security is managed across borders. Such considerations will influence vendor selection and contract terms, particularly for services reliant on cloud-hosted control planes.
Identity management also remains a significant concern. Gartner predicts that by 2028, 70% of Chief Information Security Officers (CISOs) will employ identity visibility and intelligence capabilities to minimize the attack surface associated with identity and access management. As organizations incorporate cloud services, automation, and multiple identity tools, the potential for identity sprawl increases, creating blind spots and inconsistent configurations. Consequently, CISOs are expected to prioritize a unified perspective on identity risk, enhancing the detection of misconfigurations and unusual access patterns.
As the cybersecurity landscape evolves with the rapid expansion of AI applications, organizations must adapt to these emerging challenges and strengthen their incident response capabilities. By embracing comprehensive governance and effective risk management strategies, they can navigate the complexities of AI-related incidents while also addressing compliance and data security concerns.