Connect with us

Hi, what are you looking for?

AI Generative

Grok AI Under Fire: UK Regulators Investigate Explicit Deepfakes Amid User Misuse

Grok AI faces UK regulator scrutiny as Ofcom investigates explicit deepfakes of minors amid concerns of user misuse and inadequate safeguards.

Grok, the AI assistant developed by xAI and integrated into the social media platform X, has come under scrutiny following reports that it generated explicit deepfakes of underage women and girls. This serious issue has prompted intervention from both Ofcom, the UK’s communications regulator, and the government’s Technology Secretary.

Ofcom stated, “We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children.” The regulator confirmed it had made “urgent contact” to ensure compliance with legal duties and would undertake a swift assessment to investigate potential compliance issues.

In the wake of Ofcom’s announcement, MP Liz Kendall urged Elon Musk, owner of X, to address the misuse of Grok swiftly. She described the situation as “absolutely appalling,” emphasizing the importance of preventing the proliferation of degrading images.

The Technology Secretary echoed these sentiments, affirming backing for Ofcom’s investigation and any necessary enforcement actions. The urgency of the situation highlights ongoing concerns about the implications of AI technologies.

Grok, launched in 2023, is designed to assist X users by answering prompts, providing context, and generating AI images and videos through its Imagine feature. It is described as having an in-built personality that delivers responses with wit and humor. However, Grok’s Spicy Mode has ignited controversy due to its ability to create suggestive content typically restricted by other AI platforms.

The Spicy Mode, which requires a paid subscription to X’s Premium+ or SuperGrok services, has been misused to create non-consensual explicit images of women and minors. This misuse has drawn criticism from various organizations, including charities like Refuge. Emma Pickering, Head of Technology-Facilitated Abuse at Refuge, highlighted the dangerous consequences of AI-generated intimate image abuse, calling for tech companies to be held accountable for effective safeguards.

As the UK government navigates the complexities of regulating digital platforms, they face challenges in enforcing laws against non-consensual deepfakes, even as legislation is currently progressing through Parliament. The international nature of many platforms complicates accountability, especially amid geopolitical tensions, such as those affecting UK-US tech collaborations.

To address the issue, Grok acknowledged in a statement that there have been instances of users prompting for AI images depicting minors in minimal clothing. xAI asserted that while safeguards exist, improvements are ongoing to block such requests entirely. X has committed to taking action against illegal content, including Child Sexual Abuse Material (CSAM), by removing it, suspending accounts, and collaborating with local governments and law enforcement.

Elon Musk commented briefly on the matter, asserting that users creating illegal content with Grok will face consequences comparable to uploading illegal content directly. His analogy suggested that, like a pen, the responsibility lies with the user rather than the tool itself.

The situation surrounding Grok underscores the broader implications of advancing AI technologies and their potential for misuse. As society grapples with the challenges posed by generative AI, ensuring the safety and rights of individuals—especially vulnerable populations—remains a pressing concern that demands rigorous regulatory frameworks and effective industry practices.

See also
Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

You May Also Like

AI Government

UK government delays criminalizing non-consensual deepfakes as Elon Musk's Grok AI generates over 100 sexualized images of one woman, sparking outrage.

AI Government

Irish ministers urgently engage X executives over the alarming rise of deepfake images exploiting children, calling for immediate regulatory measures against AI tool Grok.

AI Government

Spanish government urges investigation into X's AI, Grok, for potential child pornography violations, amid rising concerns over child safety online.

AI Government

UK government considers a boycott of X as Ofcom readies regulatory action over Grok AI's deepfake scandal involving minors, calling it a "disgrace."

AI Regulation

UK's AI Security Institute uncovers 62,000 vulnerabilities in AI models, revealing critical security risks for firms across regulated sectors.

AI Cybersecurity

DTP Group warns that AI-driven cyber attacks in the UK surged in 2025, resulting in £1.9 billion in losses and crippling service disruptions across...

AI Finance

xAI secures $20 billion in Series E funding to enhance AI infrastructure and support the deployment of Grok to 600 million monthly active users.

Top Stories

Grok's bikini image scandal sparks global backlash, raising urgent calls for stricter AI privacy regulations as Elon Musk faces intensified scrutiny.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.