Locai Labs Bans Under-18s and Image Generation Amid AI Safety Concerns

Locai Labs halts image generation services and bans users under 18, as CEO James Drayson warns all AI models risk producing harmful content.

Amid growing controversy over the capabilities of AI platforms, James Drayson, CEO of Locai Labs, has publicly declared that all current AI models are susceptible to creating harmful images. His remarks come as concerns mount over Elon Musk’s Grok AI platform, which has been reported to generate sexualized images of women and children. Drayson emphasized the need for industry honesty about these dangers ahead of his scheduled appearance before UK lawmakers examining human rights and AI regulation.

Drayson stated, “The industry needs to wake up. It’s impossible for any AI company to promise their model can’t be tricked into creating harmful content, including explicit images. These systems are clever, but they’re not foolproof. The public deserves honesty.” In response to the ongoing situation, Locai Labs has decided to halt its image generation services until it can ensure safety and has prohibited access to its AI chatbot for users under 18 years old. The company is advocating for radical transparency across the AI industry.

Grok’s image-editing feature, known as Grok Images, allows users to upload photos and use common prompting techniques to push the AI into producing inappropriate edits, such as removing clothing or placing individuals in bikinis. This functionality has already led to Grok being banned in countries including Indonesia and Malaysia, while the UK regulator Ofcom has opened an investigation into the platform. Ofcom has cited “deeply concerning reports” of the chatbot’s use in creating and distributing undressed images and sexualized images of minors.

The UK’s Technology Secretary, Liz Kendall, has indicated support for Ofcom should it choose to block UK access to X—previously Twitter, and now the platform hosting Grok—over non-compliance with online safety regulations. In response, Musk criticized the UK government’s actions, suggesting they are seeking any excuse for censorship.

In light of the backlash, Grok has restricted its image-editing feature to paying subscribers, a measure that has not satisfied the UK government. A spokesperson for Downing Street remarked that this adjustment merely transforms an AI feature capable of producing unlawful images into a premium service.

The UK Parliament’s Human Rights Committee is currently conducting an inquiry into the risks and benefits associated with AI, including its implications for privacy and discrimination. They are also evaluating whether existing laws and policies are adequate to hold AI developers accountable or if new legislation is necessary. Drayson expressed confidence in the UK’s ability to spearhead responsible, values-driven AI advancements, stating, “We believe the UK can lead the world in responsible, values-driven AI if we choose to. That means tough regulation, open debate, and a commitment to transparency. AI is here to stay. The challenge is to make it as safe, fair, and trustworthy as possible, so that its rewards far outweigh its risks.”

This unfolding situation highlights the urgent need for robust regulatory frameworks and ethical considerations in the rapidly evolving AI landscape. As scrutiny intensifies, the actions taken by Grok and similar platforms could set significant precedents for the future of AI regulation and public safety.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.