As organizations increasingly embrace AI and agentic technologies for productivity enhancement, the need for robust security measures has never been more critical. Moudy Elbayadi, chief AI and innovation officer at Evotek, emphasized the importance of implementing comprehensive security controls to safeguard AI usage. “You’ve got to put in the guardrails,” he stated, advocating for a proactive approach to security that not only enhances productivity but also protects sensitive data.
Elbayadi, a former CTO of Shutterfly and CIO of LifeLock, believes that while AI technologies present significant opportunities, they also introduce complex challenges. Visibility into AI usage is paramount, as organizations face the dual task of overseeing sanctioned tools while deterring unsanctioned “shadow AI” practices. Essential questions arise: “How do you know what AI tools are being used and abused? What agents are running?” he queried. This highlights the necessity for in-depth monitoring of AI interactions with internal systems.
The evolving landscape necessitates a shift to AI-aware data loss prevention, as outlined by Timothy Choquette, CEO of Covenant Technology Solutions. He emphasized the importance of reviewing customer data practices to ensure readiness for implementing AI solutions. This involves deploying AI-aware data loss prevention systems that can analyze user intentions in real-time, responding dynamically to potential risks while safeguarding sensitive information.
Meanwhile, strong identity and authentication protocols are crucial for managing access to AI systems. Damon McDougald, global security services lead at Accenture, underscored the need for continuous authorization and verification processes that extend beyond initial access rights. “You need to be thinking from an identity standpoint of how do [agents] get access to things at different times for different time lengths?” he noted. This perspective reflects a fundamental shift in how organizations approach identity management in the age of AI.
Governance and policy enforcement around AI usage also demand careful consideration. According to solution providers, establishing clear rules regarding who can access AI tools, what data can be utilized, and how outputs are generated is essential. The integration of AI agents complicates this landscape, requiring organizations to enforce policies as a means of ensuring that agents operate within established guidelines. As Ben Prescott, head of AI solutions at Trace3, explained, understanding the objectives of AI agents is pivotal to maintaining control over their operations.
Continuous AI red teaming is another emergent strategy, allowing organizations to assess vulnerabilities in real-time. The complexities associated with GenAI necessitate a move away from traditional point-in-time testing towards automated, ongoing evaluations. Elbayadi noted that new AI-powered tools can continuously probe for weaknesses, adapting to evolving threats while mitigating risks associated with unauthorized agent actions.
Supply chain security also presents vital considerations, particularly regarding the assessment of third-party AI model providers and data sources. Organizations must extend their security controls beyond internal systems, conducting thorough vendor risk assessments to safeguard against compromised models or malicious data inputs. This proactive approach to supply chain security mirrors best practices in traditional software security frameworks, ensuring that AI tools are held to the same stringent standards.
AI risk monitoring is equally essential for identifying performance degradation over time. As Daniel Kendzior, global data and AI security practice leader at Accenture, articulated, AI systems can evolve unpredictably, leading to risks such as model drift or policy violations. Continuous monitoring allows teams to detect these issues early, facilitating timely interventions to preserve the integrity of AI systems.
Finally, organizations must prioritize AI security training for their teams to address the human element of AI risk. Cesar Avila, founder and CIO of AVLA, highlighted the need for ongoing employee education on secure AI usage and risk awareness. “You have to train them and give them an AI policy,” he said, underscoring that comprehensive security measures are futile without informed human oversight.
In summary, as AI technologies continue to reshape the business landscape, robust security controls are essential for ensuring safe and effective utilization. Organizations must navigate complex governance challenges, invest in continuous monitoring, and prioritize human-centric training to harness the full potential of AI while mitigating associated risks.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks
















































