The rapid adoption of OpenClaw in China is challenging Beijing’s regulatory framework for artificial intelligence, as the government balances its drive for innovation with rising concerns over data security. In recent weeks, tech giants and consumers alike have embraced the autonomous AI agent, with hundreds lining up outside tech stores in Shenzhen to seek help installing the software.
However, the swift uptake has raised alarms among officials, leading to a warning issued on Wednesday to state-run enterprises and government agencies against loading OpenClaw onto office computers. The Chinese government aims to position itself as a global leader in AI, promoting its integration across various sectors to revolutionize both industries and daily life. This latest directive highlights the tension between such ambitions and the Communist Party’s instinct for maintaining stability and control.
“Chinese regulators typically respond with extraordinary speed to threats from emerging technologies, but the rate of adoption of OpenClaw and other agentic tools is still outpacing them,” noted Kendra Schaefer, partner and director of tech policy research at Trivium China. OpenClaw, an open-source autonomous AI agent developed by an Austrian, can perform tasks ranging from cleaning up emails to managing calendars and checking in for flights. The practice of installing this AI has even garnered a colloquial nickname in China: “raising lobsters.”
Major tech firms including Tencent, Alibaba, MiniMax, and Baidu have rolled out tools compatible with OpenClaw. Local governments in cities like Shenzhen, Wuxi, and Hefei have also announced substantial subsidies for startups leveraging the platform. Yet, the software demands extensive access to private data and can establish external communications, raising concerns about potential security vulnerabilities.
Yin Tongyue, chairman of Chery Automobile Co. Ltd, one of China’s leading electric vehicle manufacturers, urged caution during the frenzy, advising his team to delay the installation of OpenClaw until a training regimen could be established. “I said that everyone should hold off on the installation for now and we can have a focused training program. We should be willing to embrace new things, but not follow the crowd blindly… it can lead to some risks beyond our imagination,” he stated.
Beijing has previously expressed concerns regarding foreign entities targeting sensitive datasets, including geospatial and genetic information. The rapid adoption of OpenClaw has intensified the urgency for a regulatory response. While the Chinese government has avoided implementing a sweeping AI law, it has introduced ad-hoc measures since 2022, focusing on specific challenges such as algorithm recommendations and deepfake content. In a pioneering move, it mandated labels for AI-generated content last year.
“Beijing’s biggest challenge in regulating AI is the same one all governments face: the technology is moving so quickly that a regulation could be out of date before the ink is dry,” remarked Matt Sheehan, a senior fellow at the Carnegie Endowment for International Peace. Currently, there are no regulations specifically addressing the use of OpenClaw or similar software, raising questions about accountability for AI agents’ actions. On Wednesday, the China Academy of Information and Communications Technology announced plans to assess the trustworthiness of AI agents like OpenClaw beginning in late March, with intentions to establish a set of standards for their usage.
Ryan Xie, a teacher in Jiangmen, has been using OpenClaw for repetitive tasks while voicing concerns about security. “That’s why I look for workarounds — like running OpenClaw within a Docker container and a sandbox, or configuring specific rules to restrict it from overstepping its bounds,” he explained. This vigilance reflects broader anxieties within the tech sector, particularly as Beijing aims to boost the value added by core digital economy industries to 12.5% of GDP by 2030, up from 10.5% last year.
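The kind of containment Xie describes can be sketched with standard Docker isolation flags. This is a hedged illustration, not an official or documented OpenClaw setup: the image name `openclaw:latest` and the mount path are placeholders, and the exact rules a user would configure depend on the agent’s requirements.

```shell
# Hypothetical sketch: run an AI agent inside a locked-down Docker container.
# "openclaw:latest" and the host work directory are placeholder names.
docker run --rm \
  --network none \                      # block all outbound network traffic
  --read-only \                         # container filesystem is read-only
  --cap-drop ALL \                      # drop all Linux capabilities
  --security-opt no-new-privileges \    # forbid privilege escalation
  --memory 2g --pids-limit 128 \        # cap resource use
  --tmpfs /tmp \                        # scratch space that vanishes on exit
  -v "$HOME/agent-work:/work" \         # the only writable host directory
  openclaw:latest
```

Note that `--network none` would also block the external communications an agent needs for tasks like checking in for flights; in practice a user might instead route traffic through a user-defined network with an egress proxy, trading some isolation for functionality.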
However, the rise of AI tools like OpenClaw brings additional challenges related to social stability. The automation enabled by such technologies poses a threat to the world’s largest labor force, particularly in a country grappling with a fragile job market and youth unemployment rates lingering above 15% for the past six months. Lu Jianhua, an academic at the Chinese Academy of Sciences, shared his experience using AI to streamline research on low-altitude economic infrastructure, work that once demanded a team. “AI serves as a very capable assistant — equivalent to several human assistants,” he noted.
A recent study from Peking University examining over a million job postings revealed that sectors most susceptible to AI, such as accounting, editing, and programming, are already witnessing a decline in recruitment. In January, the Ministry of Human Resources announced it was drafting policy guidance to address the implications of AI on employment, but provided no timeline for its release. Victor Chen, a fintech worker in Guangzhou who has utilized OpenClaw for various projects, echoed concerns about potential job losses. “The more significant underlying factor is that the government — like elsewhere in the world — is not actually ready to deal with AI-driven mass unemployment and the social unrest it might cause,” he said.