AI in Cyber Security: Experts Warn Against Overhyped Solutions Amid Growing Risks

CISOs face pressure to adopt AI in cybersecurity as Gartner predicts 70% of large SOCs will pilot AI agents by 2028, yet only 15% may see real improvements.

The surge in interest surrounding artificial intelligence (AI) is reshaping the cybersecurity landscape, as companies are increasingly integrating AI solutions into their security operations. Executives are keen to leverage AI’s potential to enhance innovation and mitigate risk, but experts warn of the associated challenges that come with these technologies.

Ellie Hurst, commercial director at Advent IM, highlights a growing trend where procurement teams are incorporating AI clauses into contracts. Chief Information Security Officers (CISOs) face mounting pressure to implement AI solutions, creating fertile ground for heightened marketing efforts, including an influx of webinars and bold claims about automating security operations centers (SOCs). Hurst cautions that while the fear, uncertainty, and doubt (FUD) surrounding AI-powered cyber attacks is real, it can also lead to rushed purchasing decisions for tools that may not yet prove effective.

Hurst urges IT security leaders to thoroughly assess the maturity of AI features in cybersecurity products to avoid introducing new risks. “Some AI features genuinely save analyst time or improve detection. Others are little more than chatbots bolted onto dashboards,” she explains. Richard Watson-Bruhn, a cybersecurity expert at PA Consulting, notes that AI accelerators offered in cybersecurity tools are often available as software as a service (SaaS) add-ons, aimed at reducing repetitive workloads.

Watson-Bruhn further underscores a category of AI cybersecurity tools designed for enterprises needing trusted outputs within their corporate networks. “Use enterprise AI when the work spans multiple teams, touches sensitive data, or your policies need it to run the same way every time,” he advises. Despite the proliferation of AI-enhanced cybersecurity tools, Aditya K Sood, vice-president of security engineering and AI strategy at Aryaka, emphasizes that CISOs must discern genuine AI value from marketing hype.

Sood points out that while machine learning (ML) has long underpinned spam filters and anomaly detection systems, the advent of large language models (LLMs) and more accessible AI tooling is changing day-to-day practice. “This shift has changed how security teams interact with data – summaries instead of raw logs, conversational interfaces instead of query languages, and automated recommendations instead of static dashboards,” he notes. However, he warns against assuming that heavier AI integration by itself makes an organization more secure. “The mistake many organizations make is assuming that more AI automatically equals better security. It doesn’t,” he cautions.

Sood insists that sound IT security architecture remains paramount. “An AI bolted onto a weak security foundation won’t save you,” he states, highlighting that inadequate identity management or fragmented network visibility can lead to unreliable AI outputs. He advises that AI should amplify existing security fundamentals rather than replace them.

Hurst recommends that IT buyers focus on desired outcomes and threat models rather than just product features. “Anchor decisions to your top risks,” she says, pointing to common challenges like alert overload and slow incident investigations. She emphasizes the importance of addressing real problems rather than being swayed by the allure of AI capabilities. “Don’t buy an ‘AI cyber tool’ because it sounds clever. Buy something because it fixes a real problem you already have,” she adds.

Gartner predicts that 70% of large SOCs will pilot AI agents by 2028, although only 15% are expected to achieve measurable improvements, largely because pilots are run without structured evaluation. Craig Lawson, a Gartner vice-president analyst, acknowledges the potential of AI agents to streamline operations but stresses that rigorous evaluation is what separates meaningful results from noise. “Today’s reality is one of collaboration – AI agents are emerging as powerful facilitators, not autonomous replacements,” he explains.

Despite the promise of AI in security operations, several barriers hinder its effective deployment. Gartner anticipates that 45% of SOCs will reconsider their AI detection technology strategies by 2027. Lawson notes that issues like poor interoperability and workflow inefficiencies could complicate integration, potentially creating new silos within security operations.

In light of these developments, IT buyers must exercise caution when evaluating AI functionalities in cybersecurity products. Hurst advises organizations to ensure they have an exit strategy, emphasizing the importance of avoiding proprietary black boxes and ensuring data control. Lawson also highlights the need for seamless integration with existing SOC technologies, advocating for measurable outcomes such as reductions in mean time to repair and improvements in analyst workload.
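Lawson’s point about measurable outcomes is worth making concrete: metrics like mean time to repair (MTTR) can be computed directly from incident records, giving buyers a baseline to compare before and after an AI rollout. The sketch below is a minimal illustration, assuming hypothetical incident records with detection and resolution timestamps; real SOC data would come from a ticketing or SIEM system.

```python
from datetime import datetime, timedelta

def mean_time_to_repair(incidents):
    """Average time from detection to resolution across closed incidents.

    incidents: list of (detected_at, resolved_at) datetime pairs.
    """
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical closed incidents: (detected_at, resolved_at)
incidents = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 13, 0)),   # 4 hours
    (datetime(2025, 3, 2, 10, 0), datetime(2025, 3, 2, 12, 0)),  # 2 hours
    (datetime(2025, 3, 3, 8, 0), datetime(2025, 3, 3, 14, 0)),   # 6 hours
]

print(mean_time_to_repair(incidents))  # 4:00:00
```

Tracking this figure over time, alongside alert volume per analyst, gives the kind of measurable outcome Lawson describes, rather than relying on a vendor’s claimed improvement.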

In conclusion, as AI continues to transform the cybersecurity landscape, IT leaders must navigate the complexities of these technologies with a focus on solid security foundations and real-world effectiveness. The integration of AI should not be a shortcut but rather an enhancement of established cybersecurity practices, ensuring that organizations can effectively combat evolving threats in an increasingly digital world.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.