Security teams are grappling with an increasingly urgent challenge as artificial intelligence (AI) accelerates the speed and scale of cyberattacks. According to Mike Nichols, Elastic’s global general manager of security, attackers can now move from initial compromise to system impact in as little as 11 minutes. This rapid progression creates a significant operational hurdle for teams still relying on manual detection and response methods.
“At this speed, manual playbooks are no longer just slow, they’re mathematically impossible,” Nichols stated during his address at Elastic{ON} in Sydney. He emphasized that the growing use of AI by attackers is lowering the barrier to entry for conducting sophisticated cyber activities. AI is not only used for identifying vulnerabilities but also for generating exploits and automating aspects of attack development that previously required specialized expertise.
“We were already underwater in security,” Nichols remarked. “Now we’re at the bottom of the Mariana Trench.” However, he cautioned against the narrative that AI could entirely replace security analysts. “The first thing I always say is that AI is icing on the cake, not the entire cake,” he added. “You still need a strong foundation first: processes, people, and an architecture that works without AI. Then AI makes those systems better.”
This transition is forcing organizations to rethink their Security Operations Centers (SOCs). Many SOC teams currently rely on analysts to manually triage thousands of alerts generated across various endpoints, cloud environments, and networks. Nichols pointed out that this model is becoming increasingly unsustainable as the volume of attacks continues to escalate. AI can analyze large amounts of telemetry data and automatically highlight the most relevant threats for analysts to investigate.
“Many SOC teams are staffed with people who should be detectives,” Nichols said. “But we make them act like beat cops writing traffic tickets.” The aim, according to Nichols, is to enable analysts to concentrate on investigative work while AI manages repetitive tasks such as data correlation, alert aggregation, and initial triage.
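The division of labor Nichols describes, with AI handling aggregation, correlation, and initial triage so analysts can investigate, can be sketched in a few lines. The field names, severity weights, and scoring below are illustrative assumptions for the example, not any vendor's actual logic:

```python
# Minimal sketch of alert aggregation and initial triage: group raw
# alerts by (host, rule) and rank the clusters so an analyst sees a
# short, prioritized list instead of thousands of individual alerts.
# Schema and weights are assumptions, not a real product's behavior.
from collections import defaultdict

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage(alerts):
    """Return alert clusters ranked by a simple severity-weighted score."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[(alert["host"], alert["rule"])].append(alert)

    ranked = []
    for (host, rule), items in clusters.items():
        score = sum(SEVERITY_WEIGHT[a["severity"]] for a in items)
        ranked.append({"host": host, "rule": rule,
                       "count": len(items), "score": score})
    ranked.sort(key=lambda c: c["score"], reverse=True)
    return ranked

alerts = [
    {"host": "web-01", "rule": "suspicious-login", "severity": "high"},
    {"host": "web-01", "rule": "suspicious-login", "severity": "high"},
    {"host": "db-02", "rule": "port-scan", "severity": "low"},
]
print(triage(alerts)[0])  # the web-01 login cluster ranks first
```

Even this toy version shows the point: the analyst starts from a handful of scored clusters rather than every raw event, which is the "detective, not beat cop" shift Nichols argues for.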
Furthermore, organizations must not view AI as a mere add-on to existing systems. “You can’t just place a large language model on top of your data and expect everything to work,” Nichols asserted. “AI is fundamentally a data problem.”
Jeremy Pell, Elastic’s ANZ country manager, echoed Nichols’ sentiments, noting that many organizations are under pressure from executives and boards to develop tangible AI strategies. “Engineers and developers have one of the toughest jobs in the industry right now,” Pell said, addressing the attendees. “You are on the front line of what may be the biggest transformation our industry has ever experienced.”
Pell indicated a shift among organizations from experimental AI usage to practical deployment. “We’re moving into a new era, from AI hype to AI help,” he noted. Executives are increasingly seeking strategies that move their businesses forward rather than theoretical AI initiatives. However, Pell warned that many early AI efforts falter because organizations underestimate the complexity of their data environments.
“You need to capture and unify all your data, whether it sits on-premises, in the cloud, in structured formats, or increasingly in unstructured formats,” Pell explained. “If your AI system only sees part of the data, it only tells part of the story.” The reliability of AI outputs ultimately influences whether organizations trust the technology, he added. “If those systems produce incorrect answers, you quickly erode trust, from users, customers, and executives. Without that trust, your AI strategy simply won’t succeed.”
The security implications of inadequate data visibility are becoming increasingly apparent as attackers adopt AI-driven tools. Nichols remarked that AI has greatly accelerated vulnerability discovery and the creation of exploit techniques, thereby reducing the sophistication required to execute attacks. In response, defenders must increasingly leverage AI-assisted analysis to manage extensive volumes of security telemetry. This involves correlating data across diverse environments to identify attack patterns that may not be visible within individual systems.
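As a rough illustration of the cross-environment correlation described above, the sketch below joins hypothetical endpoint and cloud event feeds on user identity within a short time window. The event shapes and the 10-minute window are assumptions made for the example:

```python
# Illustrative cross-environment correlation: flag users whose endpoint
# alert is followed shortly by a sensitive cloud action. Neither feed
# alone would surface this sequence; the join across environments does.
WINDOW = 600  # correlation window in seconds (assumed for the example)

def correlate(endpoint_events, cloud_events):
    """Return endpoint/cloud event pairs for the same user within WINDOW."""
    hits = []
    for ep in endpoint_events:
        for cl in cloud_events:
            if (cl["user"] == ep["user"]
                    and 0 <= cl["ts"] - ep["ts"] <= WINDOW):
                hits.append({"user": ep["user"],
                             "endpoint": ep["action"],
                             "cloud": cl["action"]})
    return hits

endpoint_events = [
    {"user": "alice", "action": "credential-dump", "ts": 100},
]
cloud_events = [
    {"user": "alice", "action": "new-access-key", "ts": 400},
    {"user": "bob", "action": "list-buckets", "ts": 450},
]
print(correlate(endpoint_events, cloud_events))
```

A production system would use indexed queries rather than a nested loop, but the principle is the same: the attack pattern only emerges once telemetry from separate environments is viewed as one dataset.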
While the focus at Elastic{ON} largely centered on security operations, the company also highlighted that similar data challenges are emerging across customer-facing digital systems. New research from Elastic indicates that 72 percent of Australian online shoppers have abandoned a brand due to poor website search experiences. This underscores the growing importance of AI-powered search capabilities in the retail sector.
More than 62 percent of shoppers now expect brand search tools to exhibit the same intelligence as generative AI systems, with over half of younger consumers increasingly turning to natural-language queries rather than traditional keywords. Pell noted that these evolving expectations illustrate how AI is raising the bar across digital experiences. “Search is no longer a utility feature; instead, it’s a revenue driver,” he stated. Retailers failing to offer intelligent search experiences risk losing customers to competitors, particularly when external search engines guide users to rival brands.
Ultimately, Pell emphasized that the key to navigating this complex landscape lies in the ability to access the right data at the right time. “Helping organizations achieve real business outcomes from AI is the real challenge ahead,” he concluded.