Elon Musk has responded to growing outrage over his social media platform, X, which has allowed users to create sexualized deepfakes using Grok, the platform’s artificial intelligence (AI) chatbot. Following widespread criticism, Musk has assured the United Kingdom government that the company will block Grok from generating deepfakes in order to comply with UK law. However, the change is expected to apply primarily to users in the UK.
The complaints surrounding Grok are not new. Users have been able to manipulate posted images of women to produce altered visuals showing them in revealing clothing or sexually suggestive poses, and X’s “spicy” feature enabled the generation of topless images with minimal prompting. Such incidents underscore the need for more robust regulation of AI technologies.
Despite the public outcry and scrutiny from regulatory bodies, X initially appeared to downplay the issue, merely restricting Grok to paying subscribers. This inaction prompted governments to intervene. The UK announced plans to legislate against deepfake tools, joining Denmark and Australia in efforts to criminalize such non-consensual sexual material. The UK regulator, Ofcom, has also launched an investigation into X, which appears to have catalyzed Musk’s decision to alter Grok’s functionality.
Meanwhile, the New Zealand government has remained silent on the matter, even though local laws are ill-equipped to prevent or penalize the creation of non-consensual sexualized deepfakes. The Harmful Digital Communications Act 2015 offers some recourse, but it requires victims to demonstrate “serious emotional distress”, shifting the focus to their emotional response rather than to the nature of the act itself. The legal position becomes even murkier when the images are entirely synthetic and lack a reference photo.
Proposed changes to legislation in New Zealand would introduce criminal penalties for the creation, possession, and distribution of sexualized deepfakes without consent. While this reform is a necessary step, it addresses only part of the problem. Criminalization holds individuals accountable after harm occurs but fails to address the responsibilities of companies that design and deploy AI tools capable of producing such images.
Social media platforms are already held accountable for removing child sexual abuse material; similar expectations should apply to deepfakes. While users are ultimately responsible for their actions, platforms like X make deepfake creation easy by removing the technical barriers that once limited it. The ongoing issues surrounding Grok show that the resulting harm is predictable, and treating these incidents as isolated cases distracts from the platform’s broader responsibilities.
Light-touch regulation has proven ineffective. Although X and other social media companies have signed the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms, the code is already outdated. It sets no standards for generative AI, does not require companies to conduct risk assessments before deploying AI tools, and carries no meaningful consequences for failing to prevent foreseeable abuse. Consequently, X has been able to let Grok produce deepfakes while technically complying with the code.
Victims may seek redress from X by filing a complaint with the Privacy Commissioner under the Privacy Act 2020. Guidance from the commissioner suggests that both the use of someone’s image as a prompt and the resulting deepfake could be considered personal information. However, investigations can take years, compensation is usually minimal, and responsibility is diffused among users, platforms, and AI developers. The process does little to make platforms or tools like Grok safer before harm occurs.
New Zealand’s regulatory approach reflects a broader political inclination towards light-touch AI governance, which assumes that technological advancements will be matched by adequate self-regulation. However, this assumption is proving to be flawed. The competitive pressure to roll out new features quickly prioritizes innovation and engagement over user safety, with gendered harm often accepted as an unfortunate byproduct.
As technologies evolve, they inevitably reflect the societal conditions in which they are developed and implemented. Generative AI systems trained on vast amounts of human data may inadvertently absorb misogynistic norms. Integrating these systems into platforms without robust safeguards facilitates the creation of sexualized deepfakes that exacerbate existing patterns of gender-based violence.
The implications extend beyond individual humiliation. The knowledge that a convincing sexualized image can be generated by virtually anyone creates a persistent threat, altering how women interact online. For politicians and public figures, the potential for abuse can deter participation in public discourse, ultimately narrowing the digital public space.
Criminalizing deepfakes alone will not resolve these issues. New Zealand requires a comprehensive regulatory framework that recognizes AI-enabled gendered harm as both foreseeable and systemic. This necessitates clear obligations for companies deploying these AI tools, including duties to assess risks, implement effective safeguards, and prevent predictable misuse before it occurs. The case of Grok serves as an early warning of the challenges that lie ahead. As AI becomes increasingly embedded across digital platforms, the disconnect between technological capabilities and legislative frameworks will continue to widen unless decisive action is taken.
Moreover, Musk’s swift response to regulatory pressure in the UK shows that firm political will, backed by enforceable regulation, can shape corporate behavior.