China’s cyberspace regulator, the Cyberspace Administration of China (CAC), has unveiled a set of draft measures aimed at managing anthropomorphic artificial intelligence (AI) interaction services, inviting public feedback on the proposals. The guidelines, announced on Saturday, emphasize a regulatory approach that is inclusive and tiered, providing differentiated oversight based on identified risk levels. They are designed to foster innovation while implementing safeguards against potential abuses.
The measures target AI products and services that simulate human personality traits and communication styles, facilitating emotional interaction with users through various formats such as text, images, audio, or video. In a significant stipulation, the draft explicitly bans the generation or dissemination of content deemed harmful to national security, including materials that undermine ethnic unity or promote illegal activities. It also prohibits content that glamorizes suicide, self-harm, or any form of abuse that could jeopardize the physical or mental health of users.
Service providers are mandated to inform users when they are interacting with AI rather than humans. The regulations also call for pop-up reminders upon first login or re-entry, particularly when users exhibit signs of overdependence on the service. Lin Wei, president of Southwest University of Political Science and Law, noted in an explanatory article that while evolving AI technology enables more personalized interactions, it also introduces new risks that could infringe upon citizens' rights and erode trust within society.
Lin elaborated that the draft aligns with national strategic priorities in AI governance, offering a systematic approach to manage the emotional interactions that anthropomorphized AI can provoke. He noted that the framework aims to clarify the boundaries of responsibility and ensure that technological advancements remain safe, fair, and sustainable. By establishing a multi-dimensional risk prevention framework, the measures seek to embed accountability throughout key stages of AI service development and deployment.
The draft regulations are particularly focused on the emotional aspects of human-machine interaction, addressing the potential blurring of boundaries between humans and AI. Lin stressed that clearer guidelines can facilitate the responsible development of anthropomorphic AI services in China, providing a forward-looking framework that could serve as a reference for global governance of similar technologies.
As AI technologies continue to advance, the implications of these draft measures could resonate beyond China’s borders, influencing international standards and practices in AI governance. The CAC’s efforts underscore the dual objectives of fostering innovation while protecting societal interests, an ongoing challenge in the rapidly evolving tech landscape.
For more information on the draft measures, visit the official website of the Cyberspace Administration of China.