LONDON — New research argues that the narrative portraying Beijing’s artificial intelligence (AI) oversight as strictly a product of its authoritarian government is overly simplistic. Xuechen Chen, an associate professor in politics and international relations at Northeastern University in London, co-authored a paper that illustrates how traditional Chinese values and commercial interests have played significant roles in shaping AI governance in the country.
The paper, titled “State, society, and market: Interpreting the norms and dynamics of China’s AI governance,” appears in the Computer Law & Security Review. The prevalent view holds that President Xi Jinping and the Chinese Communist Party dictate technology management, creating a top-down approach in which dissent and anti-government views face stringent censorship. However, Chen asserts that this interpretation neglects how societal norms and private-sector players, such as ByteDance, the owner of TikTok, and the AI firm DeepSeek, have also shaped governance.
“What we wanted to do is demonstrate that China’s AI governance, and digital governance more broadly, is not like what people imagine—a top-down, state-driven system where the national government says you should do that, and then you just do it,” Chen explained. “It’s actually not like that because in this whole governance process, there exist a wide range of different stakeholders, including obviously the state, but also the private sectors and then more recently, and I think more importantly, society.”
Chen characterizes these three elements, the state, the private sector, and society, as critical stakeholders in the governance discussion. “They collaborate and then they co-produce these norms and regulatory mechanisms,” she added. A study by Tech Buzz China and Unique Research indicates that 23 of the 100 largest AI products globally by annual recurring revenue are developed by Chinese firms, primarily aimed at overseas markets. The top Chinese companies, Glority, Plaud, ByteDance, and Zuoyebang, reported a combined revenue of $447 million, still well short of major U.S. players such as OpenAI and Anthropic, whose estimated revenues are around $17 billion and $7 billion, respectively.
China lacks ratified AI legislation akin to the European Union’s AI Act but operates under a more market-led regulatory model, according to Chen. The Cyberspace Administration of China, the nation’s internet regulator, leads this governance. Critics argue that this approach serves as a guise for state censorship, illustrated by a recent two-month campaign in which the agency threatened “strict punishments” against social media platforms like Weibo for failing to control “negative” content about life in China.
According to Wired, every AI company must register with the Cyberspace Administration and demonstrate that its products avoid risks ranging from psychological harm to “violating core socialist values.” Chen’s paper notes that China has established formal regulations specifically for generative AI, making it a pioneer in this area. This development comes against the backdrop of Western discussions surrounding AI safety, particularly following incidents like that involving Grok, Elon Musk’s AI on the social media platform X, which created sexualized deepfake images of women and children.
Chinese generative AI regulations prohibit the creation of unlawful or vulgar content to align with “the taste and wider concerns of contemporary Chinese society,” the paper states. “China has also developed arguably one of the most effective and rigorous systems for minor protection in cyberspace, encapsulating gaming, short-video, and generative AI services.” The government updated its comprehensive Minors Protection Law last year to impose online restrictions, limiting minors’ screen time and mandating child-friendly modes from smartphone manufacturers.
Even prior to this legislative update, Chen indicated that AI developers had taken the initiative to self-regulate to avoid conflicts with the government. This stems partly from a desire to comply with strict censorship laws; for instance, DeepSeek refrains from responding to prompts critical of Xi’s administration. The second impetus for self-regulation is market-driven, as Chinese culture embodies Confucian values that emphasize family hierarchy. If parents discover their children engaging with inappropriate content, they are likely to intervene and abandon platforms that fail to filter harmful material.
“If ByteDance does not control the content for kids, then the parents would be furious, and then they would simply just say, ‘No, I’m not going to use your TikTok, and I’m done,’” Chen remarked. “Tech companies don’t want to face this kind of scenario where the consumers are not happy.” Chen acknowledged a broader question regarding the influence of non-state actors within an authoritarian framework but emphasized that their active participation in shaping regulations warrants further research.
“In this paper, what we wanted to demonstrate is that these different actors, they indeed have been actively participating in shaping the regulations and policies and guidelines and standards in the field,” she concluded, underscoring the multifaceted landscape of AI governance in China.