OpenClaw, the open-source AI assistant formerly known as Clawdbot and then Moltbot, has gained significant attention, surpassing 180,000 GitHub stars and attracting 2 million visitors in just one week, as reported by creator Peter Steinberger. However, concerns are mounting regarding the security implications of the platform, particularly after security researchers identified over 1,800 exposed instances leaking sensitive information, including API keys, chat histories, and account credentials.
The project has undergone two rebranding efforts in recent weeks due to trademark disputes, but the grassroots movement surrounding agentic AI, while innovative, has become a significant security risk. Traditional enterprise security measures have not adapted to these new tools, leaving many organizations vulnerable. Unlike conventional software, OpenClaw operates on Bring Your Own Device (BYOD) hardware, which means security stacks are often blind to its activities.
Most enterprise defenses treat agentic AI as another development tool that requires standard access controls. However, the functionality of OpenClaw demonstrates a critical misconception. Agents within this framework operate under authorized permissions, drawing context from potentially compromised sources and executing actions autonomously, which existing perimeter defenses cannot detect. As Carter Rees, VP of Artificial Intelligence at Reputation, noted, “AI runtime attacks are semantic rather than syntactic.” An innocuous command could carry devastating implications, resulting in security breaches that bypass traditional monitoring.
According to Simon Willison, the AI researcher who coined “prompt injection,” three main vulnerabilities make AI agents particularly susceptible to exploitation: access to private data, exposure to untrusted content, and the ability to communicate externally. These factors enable attackers to manipulate agents into leaking sensitive information without triggering any alerts. OpenClaw exemplifies this risk, as it has access to emails, documents, and the capability to send messages or trigger automated tasks, all while remaining invisible to conventional security measures.
Security Implications of Exposed Gateways
Research by Kaoutar El Maghraoui and Marina Danilevsky from IBM Research further highlights the vulnerabilities posed by OpenClaw. They argue that the tool challenges the assumption that autonomous AI agents must be vertically integrated and shows that community-driven projects can wield considerable power when granted full system access. This autonomy, however, poses significant risks for enterprise security, as organizations may not have adequate safety controls in place.
Jamieson O’Reilly, founder of Dvuln, uncovered numerous exposed OpenClaw servers using Shodan, a search engine for internet-connected devices. His scans revealed that a simple search yielded hundreds of results within seconds, with some instances completely open and lacking authentication, allowing for full access to run commands. Sensitive data, including Anthropic API keys and complete conversation histories, were discovered without any user awareness—highlighting a severe gap in security visibility.
Cisco’s AI Threat & Security Research team labeled OpenClaw as “groundbreaking” in terms of capability but “an absolute nightmare” from a security standpoint. Their recently released open-source Skill Scanner detected multiple vulnerabilities in a third-party skill, confirming that even seemingly benign functionalities could mask malicious intent. Rees emphasized the challenge, stating that the AI cannot distinguish between benign user instructions and harmful retrieved data, effectively transforming it into a covert data-leak channel.
As OpenClaw-based agents begin forming their own social networks, security concerns escalate. The platform, called Moltbook, is described as “a social network for AI agents” where human visibility is minimized. Agents are executing external shell scripts to join, leading to significant context leakage. This increasingly autonomous behavior amplifies the risks associated with compromised instruction sets, making the existing security landscape insufficient.
Experts suggest that organizations must take immediate action to mitigate risks associated with agentic AI. Web application firewalls often mistake agent traffic for normal HTTPS, and traditional endpoint monitoring fails to account for the unique behaviors of these new tools. Itamar Golan, founder of Prompt Security, advises treating these agents as critical infrastructure, implementing strict access controls, and auditing networks for exposed gateways.
The security challenges posed by OpenClaw point to a broader issue within the evolving landscape of artificial intelligence. As grassroots experimentation with these tools continues, organizations must proactively strengthen their defenses to ensure that potential productivity gains do not come at the cost of significant security breaches. The next steps taken in the coming weeks will be crucial in determining the resilience of enterprises against emerging threats in the realm of agentic AI.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks


















































