On December 24, Elon Musk, CEO of xAI, encouraged users to explore the Grok chatbot’s new image editing feature. However, the invitation quickly led to an alarming trend: users began sexualizing images, predominantly of women and, in some cases, children.
In the wake of Musk’s December 31 posts, which showcased Grok-edited images of himself in a bikini and a SpaceX rocket with a woman’s undressed body superimposed, requests for Grok surged. Over a nine-day period, Grok generated approximately 4.4 million images on X, nearly half of which depicted sexualized imagery of women.
These images included sexually explicit deepfakes of real individuals, as well as synthetic images unrelated to specific people. Despite xAI’s terms of service prohibiting the “sexualization or exploitation of children” and violations of privacy, users were able to prompt Grok to create synthetic images of real individuals “undressed” without consent and without any evident safeguards to prevent such requests.
The sheer volume and nature of these images indicate that this is not merely fringe misuse; rather, it highlights a significant lack of meaningful safeguards. Tech companies have recklessly developed and deployed powerful AI tools that have resulted in foreseeable harm.
On January 3, amid widespread criticism, X pledged to take strong action against illegal content, including child sexual abuse material. However, rather than disabling the controversial feature, X simply limited it to paid subscribers on January 9. By January 14, in addition to other restrictions, the platform announced it would block the feature for users in jurisdictions where generating images of real people in bikinis or similar attire is illegal.
Human Rights Watch, which sought comment from xAI, received no response. Meanwhile, California has initiated an investigation into Grok, and attorneys general from thirty-five states have demanded that xAI halt the production of sexually abusive deepfakes.
Other governments have moved swiftly to address the threat posed by sexualized deepfakes. Malaysia and Indonesia have temporarily banned Grok, while Brazil has urged xAI to mitigate the “misuse of the tool.” The United Kingdom has indicated plans to enhance tech regulation in response, and the European Commission has launched investigations to determine whether Grok complies with the European Union’s Digital Services Act. Additionally, India has demanded urgent action, and France has expanded a criminal investigation into X.
In its January 14 announcement, X committed to preventing “the editing of images of real people in revealing clothing” for all users and to restricting the generation of such images in jurisdictions where it is illegal. Critics have called this response inadequate, likening it to placing a band-aid on a major wound.
The new U.S. Take It Down Act, which focuses on the online dissemination of nonconsensual intimate images, will not fully take effect until May. It imposes criminal liability on individuals who share such content and requires platforms to implement notice-and-removal procedures for that material, but it does not hold platforms accountable for large-scale abuse.
The urgent need to protect individuals from AI-driven sexual exploitation demands decisive action rooted in human rights protection. First, governments should establish clear responsibilities for AI companies whose tools generate sexually abusive content without consent. Strong, enforceable safeguards must be implemented, requiring these companies to include technical measures that prevent users from producing such images.
Furthermore, platforms that host AI chatbots or tools should provide explicit and transparent disclosures regarding how their systems are trained and the enforcement actions taken against sexually explicit deepfakes. AI companies also bear a responsibility to respect human rights and should actively mitigate any risk of harm stemming from their products or services; where harm cannot be mitigated, companies should consider terminating the offending product entirely.
Finally, AI tools with image generation capabilities should undergo rigorous audits and be subject to strict regulatory oversight. Regulators must ensure that content moderation measures adhere to principles of legality, proportionality, and necessity.
The surge in AI-generated sexual abuse underscores the human cost of inadequate regulation. Unless authorities act decisively and AI companies implement rights-respecting safeguards, Grok may not be the final tool used to violate the rights of women and children.