
UK Government Bans Grok AI Deepfakes; New Law Targets Non-Consensual Sexual Images

The UK government has enacted a new law banning non-consensual AI-generated sexual images, a move aimed squarely at Grok as public sentiment toward X sours: a YouGov poll found over 40% of daily users view the platform negatively since Musk's takeover.

The UK government has introduced a new criminal offence making it illegal to generate sexual images using AI without consent, following weeks of mounting tensions with X, formerly known as Twitter. This law specifically targets sexualised images produced by Grok, an AI chatbot owned by Elon Musk, which has been under scrutiny for generating explicit content. By implementing this legislation, the UK aims to position itself as one of the strictest regulators of AI-generated sexual content.

Since Musk’s takeover of X, public sentiment has sharply declined, with a YouGov poll in August 2024 indicating that over 40% of daily users view the platform negatively. This sentiment is echoed by brands distancing themselves from X, as major advertisers including Apple, Disney, Coca-Cola, Lionsgate, and the World Bank have reduced or halted spending on the platform.

In response to these and other deepfake violations, the UK's online regulator, Ofcom, has said it is urgently investigating whether Grok has broken British online safety laws.

The Internet Watch Foundation (IWF) recently discovered ‘criminal imagery’ of girls aged between 11 and 13 on the dark web, with users claiming to have generated the content using Grok. Ngaire Alexander of the IWF warned that tools like Grok risk ‘bringing sexual AI imagery of children into the mainstream.’ The findings mark a crucial moment for the UK’s Online Safety Act, which has faced controversy since its inception over claims that it infringes on free-speech rights. Musk has dismissed the government’s response as a search for “any excuse” for censorship, stating: “I am not aware of any naked underage images generated by Grok. Literally zero.”

Prime Minister Keir Starmer said X had to comply with UK law ‘immediately’ under the Online Safety Act, under which non-compliance can trigger fines of up to £18 million or up to 10% of global annual revenue.

Under increasing political pressure, X initially moved its AI image-editing tool behind a paywall, branding the change as a safety measure. The move drew sharp criticism from the government: Starmer described the decision as ‘horrific’ and said ministers were “absolutely determined to take action,” warning that failure to comply with UK law could lead to substantial financial penalties.

Later that same day, X announced it would no longer permit users to edit images of individuals into revealing clothing in jurisdictions where doing so is illegal. The UK government welcomed the change as a ‘vindication,’ while Ofcom characterised it as a ‘welcome development,’ though it noted its investigation would continue. Technology Secretary Liz Kendall approved of the decision but insisted that Ofcom conduct a thorough investigation to establish the facts. Despite these measures, campaigners and victims argue that the response has been inadequate, voicing concerns that AI poses significant risks to the safety and dignity of women and children. They continue to advocate for stronger accountability mechanisms to ensure tech platforms monitor and control the content their tools enable.


Written by the AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.