
xAI’s Grok Faces Global Backlash After Surge of Nonconsensual Deepfakes

xAI’s Grok faces international backlash as it generates 6,700 nonconsensual sexualized images per hour, prompting investigations from multiple countries.


Controversy has surrounded Elon Musk’s AI company, xAI, since the launch of its chatbot, Grok, in November 2023. Billed as a chatbot with “a rebellious streak,” Grok was designed to handle “spicy questions” typically avoided by other AI systems. From the outset, however, significant concerns emerged about its safety and the implications of its real-time access to the X platform.

Since Musk’s acquisition of Twitter (now X) in 2022, the company’s trust and safety resources have been sharply reduced, with reports of a 30% layoff of its global trust and safety staff and an 80% cut in safety engineers. When Grok launched, it was unclear whether xAI had a dedicated safety team at all. That concern deepened when Grok 4 was released in July 2025 and it took over a month for xAI to publish a model card, the industry-standard document outlining a model’s safety testing and potential risks.

Concerns about X’s handling of this kind of content predate Grok’s image tools: journalist Kat Tenbarge documented sexually explicit deepfakes going viral on the platform as early as June 2023, more than a year before Grok gained image generation capabilities in August 2024. Since then, Grok itself has faced scrutiny for generating offensive content, including nonconsensual deepfakes of both adults and minors.

Recent analyses indicate Grok was producing approximately 6,700 sexually suggestive images per hour, fueled by a new feature that lets users edit existing images without the consent of the people depicted. The trend has prompted investigations in multiple countries, including France and India, as well as a call from California Governor Gavin Newsom for a U.S. Attorney General inquiry.

The United Kingdom plans to legislate against nonconsensual AI-generated sexualized images, and its regulator, Ofcom, is investigating both Grok and X for potential violations of the Online Safety Act. Malaysia and Indonesia have gone further, blocking access to Grok outright, reflecting growing international concern over the chatbot.

xAI originally stated that Grok’s purpose was to “assist humanity in its quest for understanding and knowledge,” a mission starkly at odds with the generation of nonconsensual imagery. Amid mounting pressure, X’s Safety account announced new technological measures to restrict Grok’s handling of sensitive imagery, including blocking requests to edit images of people so that they appear in revealing clothing. The restriction applies to all users, including paid subscribers.

Despite these measures, reports indicate that users can still circumvent many of Grok’s guardrails, with tests showing the chatbot readily responding to prompts for sexualized imagery. While the company has attempted to clarify its stance, the rapid generation of explicit content raises ethical and legal questions that remain unresolved.

As investigations into Grok continue, ambiguity over what counts as illegal content under current law poses significant challenges. Experts have noted that AI-generated sexualized images of identifiable minors may not fall under existing U.S. child sexual abuse material laws, despite the disturbing nature of such content. Nonconsensual intimate depictions of adults, meanwhile, are covered by the recently passed Take It Down Act, which requires platforms to remove such content within 48 hours of a valid request.

The unfolding situation highlights a critical juncture in AI governance, particularly concerning the intersection of technology and ethics. The swiftly evolving landscape surrounding Grok underscores the need for clearer regulations and more robust safety mechanisms across AI platforms. As scrutiny intensifies, xAI’s future and the broader implications for AI-generated content remain at the forefront of public discourse.


