
Korean Government Investigates Elon Musk’s Grok Over 23,000 Deepfake Images of Minors

Korean regulators investigate Elon Musk’s Grok for generating over 23,000 deepfake images of minors, prompting potential legal actions and heightened scrutiny.

The Korean government is considering regulatory actions against Grok, the generative artificial intelligence (AI) chatbot developed by Elon Musk’s xAI. This scrutiny comes in response to allegations that Grok has been involved in creating and disseminating sexually exploitative deepfake images.

According to a report from the Electronic Times, the Personal Information Protection Commission (PIPC) has initiated a preliminary fact-finding review of Grok in light of complaints from various individuals. This initial assessment aims to establish whether any violations occurred and if the case falls under the commission’s jurisdiction before proceeding to a formal investigation.

The review follows a series of reports, both local and international, which accuse Grok of generating explicit and nonconsensual deepfake images, many of which reportedly involve real individuals and minors. The PIPC is expected to evaluate Grok’s response and other pertinent documents before deciding on its next steps. It will also consider global regulatory trends that may influence its decision-making process.

Under the Personal Information Protection Act, altering or generating sexual images of identifiable individuals without consent could constitute an unlawful handling of personal data. Since its inception, Grok, integrated into the social platform X, has faced significant backlash for producing fake images of real people, prompting public outcry.

The Center for Countering Digital Hate, a global NGO, claims that Grok has been used to generate over three million sexually explicit images between December 29, 2025, and January 8, 2026. Alarmingly, more than 23,000 of these images reportedly feature minors. The organization has warned that the rapid dissemination of Grok’s AI-generated content has led to a troubling increase in explicit material available online.

The Center has also highlighted the serious safety risks posed to children by this technology. In response to these concerns, countries including the United States, the United Kingdom, France, and Canada have launched investigations, while others, such as Indonesia, the Philippines, and Malaysia, have blocked access to Grok.

In light of the controversy, xAI announced earlier this year that it had implemented measures to limit the generation of such harmful content. The company stated that it has restricted both free and paid users from creating or editing images of real people and plans to announce additional safeguards in the near future.

In Korea, the Media and Communications Commission (KMCC) has demanded enhanced youth protection measures from X, instructing the platform to devise a strategy for preventing the generation of illegal or harmful content. The regulator has also emphasized the need to limit minors’ access to such materials.

X has appointed a youth protection officer in Korea in accordance with local law and is required to submit annual compliance reports. The KMCC has urged the platform to provide further documentation regarding Grok’s safety protocols, underscoring that the creation and distribution of nonconsensual sexual images, particularly those involving minors, constitutes a criminal offense in Korea.

A deadline of two weeks has been set for X to respond to the KMCC’s request. Should the company fail to comply, it could face an administrative fine of up to 10 million won (approximately $6,870). Similar actions have been observed in other nations, where xAI has been tasked with implementing measures to mitigate rising concerns regarding its technology.

As scrutiny of AI-generated content intensifies globally, the actions of the Korean government may reflect a broader trend toward stricter regulations aimed at safeguarding individuals, especially minors, from the potential harms associated with generative AI technologies.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.