
Locai Labs Bans Under-18s and Image Generation Amid AI Safety Concerns

Locai Labs halts image generation services and bans users under 18, as CEO James Drayson warns all AI models risk producing harmful content.

Amid growing controversy over the capabilities of AI platforms, James Drayson, CEO of Locai Labs, has publicly declared that all current AI models are susceptible to creating harmful images. His remarks come as concerns mount over Elon Musk's Grok AI platform, which has reportedly generated sexualized images of women and children. Drayson emphasized the need for industry honesty about these dangers ahead of his scheduled appearance before UK lawmakers examining human rights and AI regulation.

Drayson stated, “The industry needs to wake up. It’s impossible for any AI company to promise their model can’t be tricked into creating harmful content, including explicit images. These systems are clever, but they’re not foolproof. The public deserves honesty.” In response to the ongoing situation, Locai Labs has decided to halt its image generation services until it can ensure safety and has prohibited access to its AI chatbot for users under 18 years old. The company is advocating for radical transparency across the AI industry.

Grok's image-editing feature, known as Grok Images, allows users to upload photos and use well-known prompting techniques to coax the AI into producing inappropriate edits, such as removing clothing or placing subjects in bikinis. The feature has already led to Grok being banned in Indonesia and Malaysia, while the UK regulator Ofcom has opened an investigation into the platform, citing "deeply concerning reports" of the chatbot being used to create and distribute undressed images and sexualized images of minors.

The UK's Technology Secretary, Liz Kendall, has indicated support for Ofcom should it choose to block UK access to X—previously Twitter, and now the platform hosting Grok—over non-compliance with online safety regulations. In response, Musk criticized the UK government's actions, suggesting it was seeking any excuse for censorship.

In light of the backlash, Grok has restricted its image-editing feature to paying subscribers, a measure that has not satisfied the UK government. A spokesperson for Downing Street remarked that this adjustment merely transforms an AI feature capable of producing unlawful images into a premium service.

The UK Parliament’s Human Rights Committee is currently conducting an inquiry into the risks and benefits associated with AI, including its implications for privacy and discrimination. They are also evaluating whether existing laws and policies are adequate to hold AI developers accountable or if new legislation is necessary. Drayson expressed confidence in the UK’s ability to spearhead responsible, values-driven AI advancements, stating, “We believe the UK can lead the world in responsible, values-driven AI if we choose to. That means tough regulation, open debate, and a commitment to transparency. AI is here to stay. The challenge is to make it as safe, fair, and trustworthy as possible, so that its rewards far outweigh its risks.”

This unfolding situation highlights the urgent need for robust regulatory frameworks and ethical considerations in the rapidly evolving AI landscape. As scrutiny intensifies, the actions taken by Grok and similar platforms could set significant precedents for the future of AI regulation and public safety.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.