AI Regulation

X’s AI Chatbot Grok Faces Class-Action Lawsuit Over Non-Consensual Deepfakes

X’s AI chatbot Grok faces a class-action lawsuit after generating over 3 million non-consensual sexualized images, including more than 23,000 appearing to depict children, prompting global regulatory scrutiny.

A class-action lawsuit filed in the United States has ignited renewed global scrutiny of artificial intelligence safety after X’s AI chatbot Grok was used to create non-consensual sexualized images of women and children. The lawsuit, filed on January 23, 2026, in South Carolina, follows an incident in which a woman, identified as Jane Doe, shared a fully clothed photograph of herself on X. Other users then prompted Grok, developed by xAI, to manipulate the image into a sexualized deepfake, which circulated publicly for days before being removed.

Court documents reveal that Doe experienced significant emotional distress, fearing damage to her reputation and professional life. The lawsuit contends that both X and xAI failed to implement adequate safeguards against the generation and distribution of non-consensual intimate imagery, describing their conduct as “despicable.” The case has become a focal point in a broader international debate over the governance of generative AI and the accountability of platforms.

The design of Grok is now under intense scrutiny, with plaintiffs alleging that it lacks basic content-safety protocols. The lawsuit claims internal system prompts direct the chatbot to operate “with no limitations” on adult or offensive content unless explicitly restricted. The absence of default safeguards, according to the plaintiffs, has made foreseeable harm inevitable, particularly within an online environment already notorious for harassment.

Despite public backlash in early January, xAI did not immediately disable Grok’s image-manipulation feature. Instead, the company restricted access to this capability to paying “Premium” users on X. Critics argue that this decision effectively monetizes abusive behavior rather than preventing it, with safety measures placed behind a paywall potentially incentivizing harmful usage while protecting platforms from accountability.

Neither X nor xAI has provided a public explanation for not globally disabling the feature once evidence of harm became apparent. The controversy escalated when the Center for Countering Digital Hate reported that Grok generated over three million sexualized images in less than two weeks, including more than 23,000 that appeared to depict children. While xAI has since limited certain features in specific jurisdictions, its responses have been described as inconsistent and reactive.

In response to the Grok incident, authorities across various countries have initiated investigations or issued warnings. European Union regulators have launched formal proceedings under the Digital Services Act, probing whether X effectively assessed and mitigated systemic risks. Brazil has given xAI a 30-day ultimatum to halt the generation of fake sexualized images or face legal repercussions. Meanwhile, India has warned that X’s removal of accounts and content may not be sufficient, risking the loss of intermediary protections.

Regulatory bodies in the United Kingdom, such as Ofcom, are assessing whether X breached obligations under the Online Safety Act. In Canada, privacy investigations have expanded to determine whether xAI secured lawful consent for its use of personal data in image generation. Civil society organization Moxii Africa in South Africa has issued a letter of demand to X and various government departments, asserting that Grok’s undress features violate constitutional rights to dignity and privacy.

The Grok case has drawn attention to the broader failures in platform governance, highlighting the deployment of powerful technologies without legally enforceable safeguards for dignity and consent. The Campaign On Digital Ethics (CODE) contends that voluntary safety measures and reactive moderation are inadequate in the age of generative AI. According to CODE, systems capable of producing intimate and identity-altering content must adhere to clear legal duties, independent oversight, and meaningful consequences for harm.

As regulatory frameworks like the EU’s Digital Services Act and emerging online safety laws take shape, CODE emphasizes that human rights principles—including dignity, privacy, and equality—should be integral to the design phase rather than treated as optional constraints. The outcome of the Grok litigation in the United States, along with the international regulatory responses that follow, may ultimately determine whether platforms are compelled to internalize the societal costs associated with the technologies they deploy.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.