On April 3, 2026, China’s Ministry of Industry and Information Technology, in collaboration with nine other government agencies, unveiled the Administrative Measures for the Ethical Review and Services of Artificial Intelligence Science and Technology (Trial), marking a significant step in the nation’s approach to AI governance. This initiative underscores the growing emphasis on ethical oversight in artificial intelligence, which has emerged as a pivotal area of regulatory focus alongside life sciences.
Since the 2022 issuance of the Opinions on Strengthening the Governance of Science and Technology Ethics, China has progressively positioned artificial intelligence as a key domain of governance. In 2023, the Measures for the Ethical Review of Science and Technology established a framework where institutions hold primary responsibility for self-review, supplemented by expert evaluations for high-risk cases. Notably, AI technologies capable of influencing public opinion or engaging in highly autonomous decision-making are classified as high-risk, necessitating mandatory scrutiny by third-party experts.
The newly introduced Administrative Measures signify an evolution in China’s AI ethics governance, emphasizing both professionalization and service provision. Companies are now required to demonstrate proof of ethical review during the algorithm filing process, introducing a dual-track access model that combines “algorithm filing” with “ethical evaluation.” Regulatory focus has expanded beyond mere content security to include broader societal and labor protections. For instance, mechanisms for “algorithm auditing” in sectors like ride-hailing and food delivery require algorithmic systems to incorporate human override functions to mitigate potential exploitation of workers.
This transformation signals a shift from a regulatory approach concentrated solely on content and security to a more comprehensive, operational, and auditable ethical compliance system embedded within China’s broader technological governance framework. The Measures establish a three-tier design: internal ethics committees within organizations, external ethics review service centers, and government-led expert reviews. All universities, research institutions, and companies engaged in AI development must establish ethics committees and assume primary responsibility for ethical oversight.
When internal resources are insufficient, organizations are permitted to delegate responsibilities to external ethics review service centers. High-risk projects, particularly those impacting public opinion or involving automated decision-making, must undergo government-led expert reviews. This design embeds ethical governance in organizational structures while maintaining ultimate state oversight.
Operationally, the Measures outline a quasi-administrative approval process requiring applicants to submit detailed proposals, including technical plans, data sources, and ethical risk assessments, before project initiation. Review bodies are mandated to issue decisions within 30 days, with the authority to request revisions or reject applications outright. Oversight does not end at approval: projects remain subject to continuing review, a form of “dynamic regulation” akin to that applied in pharmaceutical or medical research.
The Measures also establish a comprehensive indicator system for AI governance in China, focusing on six key dimensions: promoting social well-being, preventing algorithmic discrimination, ensuring system reliability, maintaining transparency, tracing accountability, and protecting privacy. Although these criteria align with Western frameworks—such as the OECD principles and the EU AI Act—the emphasis in implementation places a heavier focus on controllability and risk prevention, indicative of a more engineering-oriented governance approach.
Unique to this framework is the integration of a systematic service provision mechanism alongside oversight. This dual model not only delineates compliance boundaries through tools such as ethical reviews but also enhances risk management capabilities for enterprises via ethics review service centers. As a result, AI ethics transitions from a compliance threshold to a capability that can be outsourced, reflecting a governance strategy that both mitigates risk and fosters technological development.
The ethical governance of artificial intelligence in China is now structured to foster responsible innovation while safeguarding human dignity, public order, and sustainable development. The Measures are designed to adapt dynamically to emerging challenges, ensuring the ethical implications of AI technology remain at the forefront as the industry evolves. As China continues to refine its approach to AI ethics, the broader implications for global technology governance may become increasingly significant.