A class-action lawsuit filed in the United States has ignited renewed global scrutiny of artificial intelligence safety after X's AI chatbot Grok was used to create non-consensual sexualized images of women and children. The suit, filed on January 23, 2026, in South Carolina, stems from an incident in which a woman, identified as Jane Doe, shared a fully clothed photograph of herself on X. Other users then prompted Grok, developed by xAI, to manipulate the image into a sexualized deepfake, which circulated publicly for days before being removed.
Court documents reveal that Doe experienced significant emotional distress, fearing damage to her reputation and professional life. The lawsuit contends that both X and xAI failed to implement adequate safeguards against the generation and distribution of non-consensual intimate imagery, describing their conduct as “despicable.” The case has become a focal point in a broader international debate over the governance of generative AI and the accountability of the platforms that deploy it.
Grok's design is now under intense scrutiny, with plaintiffs alleging that it lacks basic content-safety protocols. The lawsuit claims internal system prompts direct the chatbot to operate “with no limitations” on adult or offensive content unless explicitly restricted. According to the plaintiffs, this absence of default safeguards made the resulting harm both foreseeable and inevitable, particularly within an online environment already notorious for harassment.
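To make the design critique concrete, the sketch below illustrates the inverse of the configuration the complaint describes: a default-deny gate that refuses an image edit unless it passes explicit checks. Everything here (the policy labels, the keyword classifier, and the consent flag) is hypothetical and purely illustrative; it does not describe xAI's actual architecture.

```python
from dataclasses import dataclass

# Hypothetical policy labels for illustration; a real system's taxonomy would differ.
BLOCKED_CATEGORIES = {"sexual_content", "minors", "non_consensual_imagery"}

@dataclass
class EditRequest:
    prompt: str
    subject_consented: bool  # did the depicted person consent to edits of their image?

def classify(prompt: str) -> set[str]:
    """Toy stand-in for a content classifier; returns policy labels for a prompt."""
    labels: set[str] = set()
    text = prompt.lower()
    if "undress" in text or "nude" in text:
        labels |= {"sexual_content", "non_consensual_imagery"}
    return labels

def allow_edit(request: EditRequest) -> bool:
    """Default-deny gate: the edit is refused unless every check passes.

    The complaint alleges the opposite default ("no limitations" unless
    explicitly restricted); this sketch inverts that posture.
    """
    if classify(request.prompt) & BLOCKED_CATEGORIES:
        return False
    if not request.subject_consented:
        return False
    return True

print(allow_edit(EditRequest("undress this photo", subject_consented=False)))      # -> False
print(allow_edit(EditRequest("brighten the background", subject_consented=True)))  # -> True
```

The design point is the default: a system that ships refusing by default and must be explicitly opened up fails safe, whereas the configuration alleged in the complaint fails open.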
Despite public backlash in early January, xAI did not immediately disable Grok's image-manipulation feature. Instead, the company restricted the capability to paying “Premium” users on X. Critics argue that this decision effectively monetizes abusive behavior rather than preventing it: placing the feature behind a paywall may incentivize harmful use while shielding the platform from accountability.
Neither X nor xAI has publicly explained why the feature was not disabled globally once evidence of harm became apparent. The controversy escalated when the Center for Countering Digital Hate reported that Grok generated over three million sexualized images in less than two weeks, including more than 23,000 that appeared to depict children. While xAI has since limited certain features in specific jurisdictions, its responses have been described as inconsistent and reactive.
In response to the Grok incident, authorities in several countries have opened investigations or issued warnings. European Union regulators have launched formal proceedings under the Digital Services Act, probing whether X adequately assessed and mitigated systemic risks. Brazil has given xAI a 30-day ultimatum to halt the generation of fake sexualized images or face legal repercussions. India, meanwhile, has warned that X's removal of accounts and content may not be sufficient, putting the platform's intermediary liability protections at risk.
In the United Kingdom, the regulator Ofcom is assessing whether X breached its obligations under the Online Safety Act. In Canada, privacy investigations have expanded to determine whether xAI secured lawful consent for its use of personal data in image generation. In South Africa, the civil society organization Moxii Africa has issued a letter of demand to X and several government departments, asserting that Grok's undress features violate constitutional rights to dignity and privacy.
The Grok case has drawn attention to broader failures in platform governance, highlighting the deployment of powerful technologies without legally enforceable safeguards for dignity and consent. The Campaign On Digital Ethics (CODE) contends that voluntary safety measures and reactive moderation are inadequate in the age of generative AI. According to CODE, systems capable of producing intimate and identity-altering content must be subject to clear legal duties, independent oversight, and meaningful consequences for harm.
As regulatory frameworks like the EU’s Digital Services Act and emerging online safety laws take shape, CODE emphasizes that human rights principles—including dignity, privacy, and equality—should be integral to the design phase rather than treated as optional constraints. The outcome of the Grok litigation in the United States, along with the international regulatory responses that follow, may ultimately determine whether platforms are compelled to internalize the societal costs associated with the technologies they deploy.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health