The surge in interest surrounding artificial intelligence (AI) is reshaping the cybersecurity landscape, as companies increasingly integrate AI solutions into their security operations. Executives are keen to leverage AI's potential to enhance innovation and mitigate risk, but experts warn of the challenges that come with these technologies.
Ellie Hurst, commercial director at Advent IM, highlights a growing trend where procurement teams are incorporating AI clauses into contracts. Chief Information Security Officers (CISOs) face mounting pressure to implement AI solutions, creating fertile ground for heightened marketing efforts, including an influx of webinars and bold claims about automating security operations centers (SOCs). Hurst cautions that while the fear, uncertainty, and doubt (FUD) surrounding AI-powered cyber attacks is real, it can also lead to rushed purchasing decisions for tools that may not yet prove effective.
Hurst urges IT security leaders to thoroughly assess the maturity of AI features in cybersecurity products to avoid introducing new risks. “Some AI features genuinely save analyst time or improve detection. Others are little more than chatbots bolted onto dashboards,” she explains. Richard Watson-Bruhn, a cybersecurity expert at PA Consulting, notes that AI accelerators offered in cybersecurity tools are often available as software as a service (SaaS) add-ons, aimed at reducing repetitive workloads.
Watson-Bruhn further underscores a category of AI cybersecurity tools designed for enterprises needing trusted outputs within their corporate networks. “Use enterprise AI when the work spans multiple teams, touches sensitive data, or your policies need it to run the same way every time,” he advises. Despite the proliferation of AI-enhanced cybersecurity tools, Aditya K Sood, vice-president of security engineering and AI strategy at Aryaka, emphasizes that CISOs must discern genuine AI value from marketing hype.
Sood points out that while machine learning (ML) has long been integral to spam filters and anomaly detection systems, the advent of large language models (LLMs) and more accessible AI tools is altering how security teams interact with data. “This shift has changed how security teams interact with data – summaries instead of raw logs, conversational interfaces instead of query languages, and automated recommendations instead of static dashboards,” he notes. However, he warns that the assumption of enhanced security simply due to increased AI integration is a misconception. “The mistake many organizations make is assuming that more AI automatically equals better security. It doesn’t,” he cautions.
Sood insists that sound IT security architecture remains paramount. “An AI bolted onto a weak security foundation won’t save you,” he states, highlighting that inadequate identity management or fragmented network visibility can lead to unreliable AI outputs. He advises that AI should amplify existing security fundamentals rather than replace them.
Hurst recommends that IT buyers focus on desired outcomes and threat models rather than just product features. “Anchor decisions to your top risks,” she says, pointing to common challenges like alert overload and slow incident investigations. She emphasizes the importance of addressing real problems rather than being swayed by the allure of AI capabilities. “Don’t buy an ‘AI cyber tool’ because it sounds clever. Buy something because it fixes a real problem you already have,” she adds.
Gartner predicts that 70% of large SOCs will pilot AI agents by 2028, although only 15% may achieve measurable improvements without structured evaluations. Craig Lawson, a Gartner vice-president analyst, acknowledges the potential of AI agents to streamline operations but emphasizes the need for rigorous evaluation to achieve meaningful results. “Today’s reality is one of collaboration – AI agents are emerging as powerful facilitators, not autonomous replacements,” he explains.
Despite the promise of AI in security operations, several barriers hinder its effective deployment. Gartner anticipates that 45% of SOCs will reconsider their AI detection technology strategies by 2027. Lawson notes that issues such as poor interoperability and workflow inefficiencies could complicate integration, potentially creating new silos within security operations.
In light of these developments, IT buyers must exercise caution when evaluating AI functionalities in cybersecurity products. Hurst advises organizations to ensure they have an exit strategy, emphasizing the importance of avoiding proprietary black boxes and ensuring data control. Lawson also highlights the need for seamless integration with existing SOC technologies, advocating for measurable outcomes such as reductions in mean time to repair and improvements in analyst workload.
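One practical way to act on Lawson's call for measurable outcomes is to establish a baseline metric before an AI feature is deployed, then compare against it afterwards. A minimal sketch of computing mean time to repair from incident timestamps (the incident data and function name here are hypothetical, for illustration only):

```python
from datetime import datetime

# Hypothetical incident records: (detected, resolved) timestamps.
# These values are illustrative, not drawn from any real SOC platform.
incidents = [
    ("2024-03-01 09:00", "2024-03-01 13:30"),
    ("2024-03-02 14:15", "2024-03-02 15:45"),
    ("2024-03-03 08:00", "2024-03-03 20:00"),
]

def mean_time_to_repair_hours(records):
    """Average detection-to-resolution time across incidents, in hours."""
    fmt = "%Y-%m-%d %H:%M"
    total_seconds = sum(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
        for start, end in records
    )
    return total_seconds / len(records) / 3600

print(f"Baseline MTTR: {mean_time_to_repair_hours(incidents):.2f} hours")
```

Recording the same figure after an AI tool goes live turns a vendor's claim of "faster investigations" into a number the SOC can verify.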
In conclusion, as AI continues to transform the cybersecurity landscape, IT leaders must navigate the complexities of these technologies with a focus on solid security foundations and real-world effectiveness. The integration of AI should not be a shortcut but rather an enhancement of established cybersecurity practices, ensuring that organizations can effectively combat evolving threats in an increasingly digital world.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks