Elon Musk’s artificial intelligence company xAI is embroiled in controversy over the proliferation of deepfake images, just as South Korea prepares to enforce its Framework Act on the Development of Artificial Intelligence. Set to take effect on January 22, the legislation is touted as the world’s first comprehensive regulatory framework for AI, and observers are watching closely to see whether it can effectively curb the harms of deepfake technology.
Concerns are mounting over the law’s practical reach, particularly its ability to regulate xAI’s deepfake services. Reports indicate that the company’s AI model, Grok, still lets users on Musk’s social media platform, X, transform ordinary images into sexually explicit deepfakes. Countries such as Malaysia and Indonesia have restricted access to the platform in response, while others have opened legal investigations. The backlash prompted xAI to limit the feature to paid subscribers, but worries persist about the risks of deepfake content.
The newly enacted law aims to strengthen the responsibilities of AI operators and lay a foundation of trust for an AI-driven society, with a particular focus on preventing deepfake-related crimes. Notably, the legislation also applies to overseas operators, requiring them to designate domestic agents to carry out their legal obligations. It remains unclear, however, whether those agents’ duties cover the requirement to label deepfakes, raising concerns that communicating with foreign companies through intermediaries could cause delays.
The act mandates visible watermarks on AI-generated content that is difficult to distinguish from genuine material, but immediate enforcement is hampered by a grace period of at least one year. Even after that period, analysts say, directly blocking or regulating foreign AI services such as xAI’s could be complicated by the risk of trade friction. “Under current laws, it is hard to do more than impose fines if overseas companies like xAI do not cooperate voluntarily,” said Jung Chang-woo, an attorney at Lee & Ko.
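The act does not prescribe a particular watermarking technique. As a purely illustrative sketch, a visible label of the kind the law envisions could be stamped onto a generated image with a few lines of Python; the Pillow library, the label text, and the placement here are assumptions for illustration, not anything the statute specifies.

```python
# Illustrative sketch only: the Framework Act does not specify how a visible
# watermark must be applied. This example uses Pillow (pip install Pillow)
# to stamp a text label on an image; the wording, corner placement, and
# opacity are all hypothetical choices.
from PIL import Image, ImageDraw

def stamp_visible_watermark(path_in: str, path_out: str,
                            label: str = "AI-generated") -> None:
    """Overlay a semi-transparent text label in the bottom-right corner."""
    base = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Measure the label so it can be anchored with a small margin.
    left, top, right, bottom = draw.textbbox((0, 0), label)
    w, h = right - left, bottom - top
    margin = 10
    pos = (base.width - w - margin, base.height - h - margin)
    draw.text(pos, label, fill=(255, 255, 255, 180))  # white, ~70% opacity
    Image.alpha_composite(base, overlay).convert("RGB").save(path_out)

stamp_visible_watermark("generated.png", "generated_labeled.png")
```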
Experts suggest that, until the new act is fully operational, existing laws such as the Information and Communications Network Act and the Personal Information Protection Act should be used to address deepfake incidents. “The binding force of the act alone is weak,” said Yeo Hyun-dong, a lawyer at Yoon & Yang LLC. “Regulations must be supplemented by sanctioning specific violations in conjunction with existing laws.”
The South Korean government has adopted a cautious approach, advocating minimal regulation while closely monitoring developments in the AI landscape. A representative from the Ministry of Science and ICT remarked, “As AI technology is still developing through trial and error, we will watch the situation for now to allow for self-correction.”
The situation encapsulates a broader struggle to balance innovation in artificial intelligence with the need for ethical and responsible use. As xAI navigates the controversies surrounding its technology, the effectiveness of South Korea’s pioneering regulations may serve as a bellwether for other nations grappling with similar issues. The coming months will test how well these laws hold up in an era increasingly defined by sophisticated AI capabilities.