South Korea is set to enforce its Artificial Intelligence Act on Thursday, becoming the first country to establish formal safety requirements for high-performance AI systems. The legislation aims to foster growth in the domestic AI sector while introducing safeguards against the risks posed by powerful AI technologies, according to the Ministry of Science and ICT.
The act is described as a world-first legislative initiative that includes legal safety obligations for frontier AI. “This is not about boasting that we are the first in the world,” said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry, during a briefing with reporters in Seoul. “We’re approaching this from the most basic level of global consensus.”
The legislation lays the foundation for a comprehensive national-level AI policy framework. Key components include a central decision-making body, the Presidential Council on National Artificial Intelligence Strategy, and an AI Safety Institute responsible for overseeing safety assessments. The law also provides a suite of support measures covering research and development, data infrastructure, talent training, startup assistance, and international expansion.
To ease the initial burden on businesses, the government plans to implement a grace period of at least one year, during which it will not conduct fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education, with a dedicated AI Act support desk available to assist companies in understanding their obligations. Officials have indicated that the grace period may be extended in response to evolving international standards and market conditions.
The law targets three areas: obligations for high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI. High-impact AI refers to fully automated systems deployed in critical sectors such as energy, transportation, and finance, where decisions made without human intervention could significantly affect individuals’ rights or safety. The government says no domestic services currently fall into this category, although fully autonomous vehicles at Level 4 or higher could qualify in the future.
What differentiates South Korea’s approach from that of the European Union is its definition of “high-performance AI.” While the EU emphasizes application-specific risk for AI used in sectors like healthcare, recruitment, and law enforcement, South Korea applies technical thresholds such as cumulative training compute, meaning only a limited set of the most advanced models would be subject to safety requirements. As it stands, the government believes that no existing AI model, domestic or foreign, meets the criteria for regulation under this aspect of the law.
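To make the threshold concept concrete, here is a minimal Python sketch of how cumulative training compute is commonly estimated, using the widely cited approximation C ≈ 6ND (compute scales with parameter count times training tokens). The 1e26 FLOP cutoff in the code is a hypothetical figure chosen for illustration, not the threshold set by the Korean decree.

```python
# Back-of-the-envelope check of a model against a cumulative-compute threshold.
# Uses the common approximation C ≈ 6 * N * D, where N is the parameter count
# and D is the number of training tokens. The threshold below is purely
# illustrative and is NOT the figure from the Korean enforcement decree.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

THRESHOLD_FLOPS = 1e26  # hypothetical regulatory cutoff, for illustration only

# Example: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")       # ~6.30e+24
print("Above threshold?", flops >= THRESHOLD_FLOPS)  # False
```

Because a rule of this shape is defined on compute rather than application domain, the same check applies to a model regardless of what it is later used for, which is the key contrast with the EU's sector-based framing.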
In contrast, the EU is phasing in its own AI regulations, with some measures accompanied by multiyear transition periods. Enforcement of the Korean law will be relatively lenient, as it does not impose criminal penalties. Instead, the act prioritizes corrective orders for noncompliance, with fines, capped at 30 million won ($20,300), applied only if those orders are ignored. This reflects a compliance-oriented approach rather than a punitive one, according to government officials.
Transparency obligations for generative AI in South Korea align closely with EU regulations but are applied more narrowly. Content that could be mistaken for the real thing, such as deepfake images, video, or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling through metadata is permitted, and personal or noncommercial use of generative AI is exempt from regulation.
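As a rough illustration of what invisible metadata labeling can look like in practice, the sketch below embeds a provenance tag in a PNG's text chunks using the Pillow library. The field names are made up for this example, and real deployments would more likely follow a provenance standard such as C2PA content credentials rather than ad hoc keys.

```python
# Minimal sketch of "invisible" provenance labeling via image metadata,
# using Pillow's PNG text chunks. Field names are hypothetical examples,
# not mandated by any regulation or standard.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, embedding a machine-readable AI-provenance tag."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical field name
    meta.add_text("generator", "example-model-v1")  # hypothetical field name
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return a PNG's text metadata, where the label would appear."""
    return Image.open(path).text

# Usage:
# label_ai_generated("render.png", "render_labeled.png")
# print(read_label("render_labeled.png"))  # {'ai_generated': 'true', ...}
```

A tag like this is invisible to viewers of the image but readable by platforms and verification tools; under the Korean rules, visible disclosure would still be required wherever the content could be mistaken for real footage.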
Kim emphasized that the intent behind the legislation is not to stifle innovation but to create a regulatory framework that addresses growing public concerns. “The goal is not to stop AI development through regulation,” he said. “It’s to ensure that people can use it with a sense of trust.” He noted that the law should be viewed as a starting point rather than a final product, stating, “The legislation didn’t pass because it’s perfect. It passed because we needed a foundation to keep the discussion going.”
Recognizing the apprehensions of smaller firms and startups, Kim assured that the government would remain engaged throughout the implementation process. “We know smaller companies and ventures have their own worries,” he said. “As issues come up, we’ll work through them together via the support center.”