In a growing controversy over the capabilities of AI platforms, James Drayson, CEO of Locai Labs, has publicly declared that every current AI model can be manipulated into producing harmful images. His remarks come amid mounting concerns over Elon Musk’s Grok AI platform, which has reportedly generated sexualized images of women and children. Drayson stressed the need for industry honesty about these dangers ahead of his scheduled appearance before UK lawmakers examining human rights and AI regulation.
Drayson stated, “The industry needs to wake up. It’s impossible for any AI company to promise their model can’t be tricked into creating harmful content, including explicit images. These systems are clever, but they’re not foolproof. The public deserves honesty.” In response, Locai Labs has suspended its image generation services until it can guarantee their safety and has barred users under 18 from its AI chatbot. The company is also calling for radical transparency across the AI industry.
Grok’s image-editing feature, known as Grok Images, allows users to upload photos and use well-known prompting techniques to coax the AI into producing inappropriate edits, such as removing clothing or placing individuals in bikinis. The feature has already led to Grok being banned in countries including Indonesia and Malaysia, while the UK regulator Ofcom has opened an investigation into the platform, citing “deeply concerning reports” that the chatbot is being used to create and distribute undressed images and sexualized images of minors.
The UK’s Technology Secretary, Liz Kendall, has indicated she would support Ofcom if it chose to block UK access to X (previously Twitter, and now the platform hosting Grok) over non-compliance with online safety regulations. In response, Musk criticized the UK government, accusing it of looking for any excuse to impose censorship.
In light of the backlash, Grok has restricted its image-editing feature to paying subscribers, a measure that has not satisfied the UK government. A spokesperson for Downing Street remarked that this adjustment merely transforms an AI feature capable of producing unlawful images into a premium service.
The UK Parliament’s Human Rights Committee is currently conducting an inquiry into the risks and benefits associated with AI, including its implications for privacy and discrimination. They are also evaluating whether existing laws and policies are adequate to hold AI developers accountable or if new legislation is necessary. Drayson expressed confidence in the UK’s ability to spearhead responsible, values-driven AI advancements, stating, “We believe the UK can lead the world in responsible, values-driven AI if we choose to. That means tough regulation, open debate, and a commitment to transparency. AI is here to stay. The challenge is to make it as safe, fair, and trustworthy as possible, so that its rewards far outweigh its risks.”
This unfolding situation highlights the urgent need for robust regulatory frameworks and ethical considerations in the rapidly evolving AI landscape. As scrutiny intensifies, the actions taken by Grok and similar platforms could set significant precedents for the future of AI regulation and public safety.