AI Regulation

Custom AI Will Drive 50% of Cyber Incident Responses by 2028, Warns Gartner Report

Gartner forecasts that by 2028, 50% of enterprise cybersecurity incident responses will focus on custom-built AI applications, escalating risks and compliance challenges.

Custom-built AI applications are increasingly shaping the landscape of cybersecurity, with Gartner projecting that by 2028, half of enterprise cybersecurity incident response initiatives will focus on these systems. This shift comes as organizations adopt AI-driven software across various business processes and customer-facing services, often deploying applications before completing necessary testing and security reviews.

Christopher Mixter, a VP Analyst at Gartner, emphasized the urgency of this transition. “AI is evolving quickly, yet many tools—especially custom-built AI applications—are being deployed before they’re fully tested,” he noted. The complexity and dynamism of these systems pose significant security challenges over time. Currently, most security teams lack established processes for handling AI-related incidents, which can prolong resolution times and escalate required effort.

Traditionally, incident response teams have focused on detection, containment, eradication, and recovery. However, AI systems introduce additional variables, such as model behavior, data handling, and integration with other services, complicating investigations. An issue may be a security breach, a software defect, a data quality problem, or a combination of these, making it difficult for teams to pinpoint the cause.

In response to these challenges, Gartner anticipates broader adoption of AI security platforms. The firm predicts that over 50% of enterprises will utilize these platforms by 2028 to manage both third-party AI services and custom-built applications. Organizations are increasingly seeking a unified management layer to oversee multiple AI tools, as concerns about prompt injection attacks, data misuse, and inconsistent controls among departments grow. Governance demands are also escalating as business units rapidly implement new AI features.

Security leaders are advised to evaluate whether their tools cover both in-house and external AI usage, ensuring visibility into AI activities and enforcing policies across custom systems and vendor-provided services. As organizations navigate this complex landscape, the need for effective governance and risk management becomes paramount.

Gartner’s findings also highlight the rising pressure of compliance, forecasting that by the end of 2027, 75% of regulated organizations will face fines exceeding 5% of global revenue due to inadequate manual compliance processes related to AI. This prediction reflects a regulatory environment that is continuously evolving across regions. Although requirements differ, Gartner expects a convergence around structured AI risk management, intensifying the pressure on organizations that still depend on spreadsheets and ad hoc evidence collection for compliance reporting.

The introduction of AI safety regulations adds another layer of complexity for risk and compliance teams, which are already managing security, privacy, and cyber risk requirements. AI-specific regulations extend the focus to include model risk, data provenance, and ongoing monitoring, complicating the compliance landscape.

Furthermore, through 2030, Gartner anticipates that 33% of IT work will be dedicated to remediating AI data debt—weaknesses in the datasets that organizations rely on. This term encompasses unstructured or poorly secured information across file shares, SaaS platforms, and legacy systems. As AI features increasingly allow access to internal data stores, gaps in data classification and access control become more apparent. Data loss prevention programs are broadening their scope to encompass AI-driven data flows, including monitoring requests generated by generative AI tools.

As organizations grapple with these challenges, the focus on data sovereignty within cloud security is expected to increase. By 2027, Gartner forecasts that 30% of organizations will demand comprehensive sovereignty over their cloud security controls, driven by geopolitical uncertainties. This focus includes assessing where data is stored, who can access it, and how cloud security is managed across borders. Such considerations will influence vendor selection and contract terms, particularly for services reliant on cloud-hosted control planes.

Identity management also remains a significant concern. Gartner predicts that by 2028, 70% of Chief Information Security Officers (CISOs) will employ identity visibility and intelligence capabilities to minimize the attack surface associated with identity and access management. As organizations incorporate cloud services, automation, and multiple identity tools, the potential for identity sprawl increases, creating blind spots and inconsistent configurations. Consequently, CISOs are expected to prioritize a unified perspective on identity risk, enhancing the detection of misconfigurations and unusual access patterns.

As the cybersecurity landscape evolves with the rapid expansion of AI applications, organizations must adapt to these emerging challenges and strengthen their incident response capabilities. By embracing comprehensive governance and effective risk management strategies, they can navigate the complexities of AI-related incidents while also addressing compliance and data security concerns.

Written By: Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.