AI Cybersecurity

AI Integration in OT Heightens Cybersecurity Risks, Warns e2e-assure CEO

e2e-assure CEO warns that AI’s rapid integration into operational technology is heightening cybersecurity risks, urging stricter measures to safeguard critical infrastructure.

Concerns over cybersecurity are escalating as artificial intelligence (AI) becomes more prevalent in operational technology (OT) environments. Industry leaders caution that the integration of AI into critical infrastructure is creating new systemic risks that could have significant implications for safety and security.

Many organizations in industrial sectors are increasingly adopting AI to enhance efficiency through predictive maintenance, anomaly detection, and optimization tools. However, Rob Demain, Chief Executive Officer at e2e-assure, warns that security protocols are not keeping pace with this rapid adoption. He pointed to the risk that AI introduces model drift and misgeneralization into OT settings, which can lead to unsafe decision-making and the bypassing of established safety processes if AI recommendations override manual checks.

The connectivity associated with AI, particularly through application programming interfaces (APIs) and cloud services, is increasing the number of entry points into OT networks and complicating the security landscape for operators of critical infrastructure. This added connectivity raises the stakes for cybersecurity as new vulnerabilities emerge.

While the current prevalence of AI within OT remains relatively limited, several organizations are beginning to test large language model (LLM)-based assistants designed to support engineering and operational tasks. Demain notes that there are clear signs that malicious actors are already utilizing AI to enhance their cyber attack tactics. He emphasized that the deployment of AI in cyber attacks is not merely theoretical, as attackers are employing it to improve productivity and generate dynamic commands, thereby making detection increasingly challenging.

Evidence suggests that AI is enabling the development of polymorphic malware, which can disguise its communications by blending into legitimate traffic. This ability allows malicious activities to circumvent traditional OT security measures, such as signature-based detection and static indicator of compromise (IOC) matching. As a result, the landscape for defenders has become more intricate.
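To illustrate the limitation Demain describes, the following minimal Python sketch shows how static, signature-based IOC matching works and why a polymorphic variant that mutates its bytes and blends into legitimate-looking traffic can slip past it. The hashes, domains, and payloads are hypothetical and not drawn from the article.

```python
import hashlib

# Hypothetical sketch: static, signature-based IOC matching of the kind
# polymorphic malware is designed to evade. All hashes, domains, and
# payloads below are invented for illustration.

KNOWN_BAD_DOMAINS = {"malicious-c2.example"}            # blocklisted C2 domain
known_sample = b"malicious payload v1"                  # previously seen sample
KNOWN_BAD_HASHES = {hashlib.sha256(known_sample).hexdigest()}

def matches_static_iocs(payload: bytes, destination: str) -> bool:
    """Flag traffic only on an exact hash or destination match."""
    digest = hashlib.sha256(payload).hexdigest()
    return digest in KNOWN_BAD_HASHES or destination in KNOWN_BAD_DOMAINS

# The original sample, calling home to a known C2 domain, is caught.
print(matches_static_iocs(known_sample, "malicious-c2.example"))             # True

# A polymorphic variant mutates its bytes (new hash) and routes traffic
# through a legitimate-looking cloud endpoint, so neither indicator fires.
mutated_sample = known_sample + b"\x90"                 # trivially altered bytes
print(matches_static_iocs(mutated_sample, "api.legitimate-cloud.example"))   # False
```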

According to Demain, defenders must closely scrutinize both external LLM API traffic and internal model operations. He raised particular concerns about local LLMs, noting that these models often contain sensitive data that attackers could exploit; the models themselves could serve as blueprints for cybercriminals aiming to escalate their attacks.

The evolving tactic of “Living off the land,” which involves using legitimate tools and functions to conduct attacks, is being redefined by some researchers as “Living off the LLM.” This shift indicates that attackers are increasingly leveraging AI-native capabilities for covert actions within OT environments, posing new challenges for cybersecurity defense strategies.

In response to these concerns, the United States Cybersecurity and Infrastructure Security Agency (CISA) recently issued guidance urging that AI systems be segregated from OT networks. This involves ensuring that AI systems receive only read-only data feeds while maintaining a clear data flow from OT to IT without allowing AI any visibility or control over OT systems.
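As a rough illustration of that segregation pattern, the hypothetical Python sketch below exposes OT telemetry to the IT/AI side as an immutable, read-only snapshot with no code path back into OT. The sensor names and values are invented, and a real deployment would enforce this boundary with network-level controls such as data diodes rather than application code.

```python
from dataclasses import dataclass
from types import MappingProxyType

# Hypothetical sketch of the segregation pattern in the CISA guidance:
# AI/IT systems receive a read-only view of OT telemetry, and no code path
# exists for them to write back or issue commands to OT.

@dataclass(frozen=True)
class Telemetry:
    sensor_id: str
    value: float
    timestamp: float

def export_read_only_feed(readings: list[Telemetry]) -> MappingProxyType:
    """Publish OT readings outward as an immutable, read-only mapping."""
    snapshot = {r.sensor_id: r for r in readings}
    return MappingProxyType(snapshot)   # consumers cannot modify this view

# OT side: collect readings and push them across the OT -> IT boundary.
readings = [Telemetry("pump-7-pressure", 4.2, 1735689600.0)]
feed = export_read_only_feed(readings)

# IT/AI side: analysis can read the feed...
print(feed["pump-7-pressure"].value)

# ...but any attempt to write back fails; there is no control channel to OT.
try:
    feed["pump-7-pressure"] = None
except TypeError as err:
    print("write rejected:", err)
```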

Despite these recommendations, Demain expresses concern that regulatory guidance may not be stringent enough. He described the current advice as conservative, suggesting that a more robust stance is warranted to safeguard critical operations. “The latest advice from CISA is good in terms of keeping AI away from OT—providing a read-only data feed to it, sending data safely from OT to IT but not including AI where it could see/control OT systems,” Demain stated. “I do think they could go harder and discourage AI use on anything connected to OT. Safety first should mandate that these systems should be treated as a safety risk to operations at this stage.”

As organizations grapple with the intersection of AI and operational technology, the need for advanced security measures becomes increasingly apparent. With the stakes higher than ever, industry leaders and cybersecurity experts must collaborate to address these emerging threats and navigate the complexities introduced by this transformative technology.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

