AI Regulation

X’s AI Chatbot Grok Faces Class-Action Lawsuit Over Non-Consensual Deepfakes

X’s AI chatbot Grok faces a class-action lawsuit after generating over 3 million non-consensual sexualized images, including 23,000 of children, raising global scrutiny.

A class-action lawsuit filed in the United States has ignited renewed global scrutiny of artificial intelligence safety after X’s AI chatbot Grok was used to create non-consensual sexualized images of women and children. The suit, filed on January 23, 2026, in South Carolina, follows an incident in which a woman, identified as Jane Doe, shared a fully clothed photograph of herself on X. Other users then prompted Grok, developed by xAI, to manipulate the image into a sexualized deepfake, which circulated publicly for days before being removed.

Court documents reveal that Doe experienced significant emotional distress, fearing damage to her reputation and professional life. The lawsuit contends that both X and xAI inadequately implemented safeguards against the generation and distribution of non-consensual intimate imagery, describing their conduct as “despicable.” This case has become a focal point within a larger, international discourse surrounding the governance of generative AI and the accountability of platforms.

The design of Grok is now under intense scrutiny, with plaintiffs alleging that it lacks basic content-safety protocols. The lawsuit claims internal system prompts direct the chatbot to operate “with no limitations” on adult or offensive content unless explicitly restricted. The absence of default safeguards, according to the plaintiffs, has made foreseeable harm inevitable, particularly within an online environment already notorious for harassment.

Despite public backlash in early January, xAI did not immediately disable Grok’s image-manipulation feature. Instead, the company restricted access to this capability to paying “Premium” users on X. Critics argue that this decision effectively monetizes abusive behavior rather than preventing it, with safety measures placed behind a paywall potentially incentivizing harmful usage while protecting platforms from accountability.

Neither X nor xAI has provided a public explanation for not globally disabling the feature once evidence of harm became apparent. The controversy escalated when the Center for Countering Digital Hate reported that Grok generated over three million sexualized images in less than two weeks, including more than 23,000 that appeared to depict children. While xAI has since limited certain features in specific jurisdictions, its responses have been described as inconsistent and reactive.

In response to the Grok incident, authorities across various countries have initiated investigations or issued warnings. European Union regulators have launched formal proceedings under the Digital Services Act, probing whether X effectively assessed and mitigated systemic risks. Brazil has given xAI a 30-day ultimatum to halt the generation of fake sexualized images or face legal repercussions. Meanwhile, India has warned that X’s removal of accounts and content may not be sufficient, risking the loss of intermediary protections.

Regulatory bodies in the United Kingdom, such as Ofcom, are assessing whether X breached obligations under the Online Safety Act. In Canada, privacy investigations have expanded to determine whether xAI secured lawful consent for its use of personal data in image generation. Civil society organization Moxii Africa in South Africa has issued a letter of demand to X and various government departments, asserting that Grok’s undress features violate constitutional rights to dignity and privacy.

The Grok case has drawn attention to the broader failures in platform governance, highlighting the deployment of powerful technologies without legally enforceable safeguards for dignity and consent. The Campaign On Digital Ethics (CODE) contends that voluntary safety measures and reactive moderation are inadequate in the age of generative AI. According to CODE, systems capable of producing intimate and identity-altering content must adhere to clear legal duties, independent oversight, and meaningful consequences for harm.

As regulatory frameworks like the EU’s Digital Services Act and emerging online safety laws take shape, CODE emphasizes that human rights principles—including dignity, privacy, and equality—should be integral to the design phase rather than treated as optional constraints. The outcome of the Grok litigation in the United States, along with the international regulatory responses that follow, may ultimately determine whether platforms are compelled to internalize the societal costs associated with the technologies they deploy.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.