Campaigners are accusing the UK government of delaying the implementation of new legislation aimed at criminalizing the creation of non-consensual sexualized deepfakes. This criticism follows a recent backlash against images generated by Elon Musk’s AI tool, Grok, which has been used to digitally undress individuals without their consent. One woman reported that over 100 sexualized images of her have been created using the technology.
Currently, it is illegal to share sexually explicit deepfakes of adults in the UK; however, legislation passed in June 2025 that would make it a criminal offense to create or request such images has yet to be brought into force. Whether that law would apply to the images produced by Grok remains uncertain, and the BBC has asked the government for clarification.
In a statement, X, the platform on which Grok operates, emphasized that anyone using it to generate illegal content would face the same consequences as those who upload illegal material. “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the statement noted.
Andrea Simon from the End Violence Against Women Coalition criticized the government for failing to enforce the law, expressing concern that this inaction places women and girls in jeopardy. “Non-consensual sexually explicit deepfakes are a clear violation of women’s rights and have a long-lasting, traumatic impact on victims,” she stated. Simon added that the threat of such abuse can push women to self-censor, limiting their freedom of expression online.
On Tuesday, Technology Secretary Liz Kendall urged X to address the issue with urgency, labeling the situation “absolutely appalling.” The UK’s communications regulator, Ofcom, confirmed it has made “urgent contact” with both X and xAI, the company behind Grok, to investigate the matter further. Both Kendall and representatives from Downing Street have supported regulatory action, with the Prime Minister’s spokesperson indicating that “all options remain on the table.”
The Ministry of Justice reiterated that sharing intimate images without consent, including deepfakes, is already a criminal offense, and said it has introduced further legislation to prohibit the creation of such material without consent. Under existing law, creating pornographic deepfakes is already illegal where the images depict children or are used for so-called revenge porn.
Professor Lorna Woods of the University of Essex explained that a provision in the Data (Use and Access) Act 2025 criminalizes the creation or commissioning of “purported intimate images.” However, campaigners and experts note that, despite the government announcing the legal crackdown last year, the provisions needed to prosecute those who request sexualized deepfakes have not yet been brought into force.
Simon questioned the delay in bringing forward the necessary secondary legislation, calling these deepfakes a clear violation of women’s rights. Conservative peer Baroness Owen, who advocated for the legal change in the House of Lords, criticized the government’s sluggishness in enforcing the rules, stating, “We cannot afford any more delays. Survivors of this abuse deserve better.” Crossbench peer Baroness Beeban Kidron emphasized that the rapid pace of technological advancement demands swift legislative action.
Women affected by deepfakes have come forward to share their experiences. Evie, one X user, said that after posting photos of herself on the platform she became the target of at least 100 sexualized deepfakes generated by Grok. The sheer volume has left her unwilling to report the images because of the emotional toll of revisiting them. “Knowing that all the people I care about in my life can see me like that… it’s disgusting,” she remarked.
Dr. Daisy Dixon, another affected user, said she felt humiliated after seeing altered images created from her profile picture. She criticized Grok for automatically posting the altered images back to users, likening the practice to a form of control and psychological assault. “We don’t want to dilute the concept, but it feels like a kind of assault on the body,” she stated.
As the discourse surrounding deepfakes continues to evolve, voices like Evie’s are calling for immediate action. She concluded, “There’s so many places online that you can do this, but the fact that it was happening on Twitter with the built-in AI bot – this is crazy this is allowed. Why is this allowed and why is nothing being done about it?”