The UK government has introduced a new criminal offence making it illegal to use AI to generate sexual images of a person without their consent, following weeks of mounting tensions with X, formerly known as Twitter. The move follows intense scrutiny of sexualised images produced by Grok, the AI chatbot owned by Elon Musk, which has been criticised for generating explicit content. With this legislation, the UK aims to position itself as one of the strictest regulators of AI-generated sexual content.
Since Musk’s takeover of X, public sentiment has sharply declined, with a YouGov poll in August 2024 indicating that over 40% of daily users view the platform negatively. This sentiment is echoed by brands distancing themselves from X, as major advertisers including Apple, Disney, Coca-Cola, Lionsgate, and the World Bank have reduced or halted spending on the platform.
In response to this and other deepfake violations, the UK's online regulator, Ofcom, has said it is urgently investigating whether Grok has broken British online safety laws.
The Internet Watch Foundation (IWF) recently discovered 'criminal imagery' of girls aged between 11 and 13 on the dark web, with users claiming to have generated the content using Grok. Ngaire Alexander from the IWF warned that tools like Grok risk 'bringing sexual AI imagery of children into the mainstream.' The investigation marks a crucial moment for the UK's Online Safety Act, which has been controversial since its inception over claims that it infringes on free speech rights. Musk has accused the government of looking for "any excuse" for censorship, stating: "I am not aware of any naked underage images generated by Grok. Literally zero."
Under increasing political pressure, X initially transformed its AI image-editing tool into a premium service, branding the move as a safety measure. This prompted sharp criticism from the government, with Prime Minister Keir Starmer describing the decision as 'horrific' and asserting that ministers were "absolutely determined to take action." Starmer said X had to comply with UK law 'immediately' under the Online Safety Act, under which non-compliance can trigger fines of up to £18 million or 10% of global annual revenue.
Later that same day, X announced it would no longer permit users to edit images of individuals into revealing clothing where doing so is illegal. The UK government welcomed this change as a 'vindication,' while Ofcom characterised it as a 'welcome development,' though it noted that its investigation would continue. Technology Secretary Liz Kendall expressed approval of the decision but insisted on a thorough investigation by Ofcom to establish the facts. Despite these measures, campaigners and victims argue that the response has been inadequate, warning that AI poses significant risks to the safety and dignity of women and children. They continue to advocate for stronger accountability mechanisms to ensure tech platforms monitor and control the content their tools enable.