AI Regulation

Musk Promises to Block Sexualized Deepfakes on X, Highlighting Regulatory Gaps in NZ

Elon Musk vows to block sexualized deepfakes on X after UK regulatory pressure, spotlighting gaps in New Zealand’s AI governance and accountability.

Elon Musk has responded to growing outrage over his social media platform, X, after users were able to create sexualized deepfakes with Grok, the platform’s artificial intelligence (AI) chatbot. Following widespread criticism, Musk has assured the United Kingdom government that the company will block Grok from generating such deepfakes in order to comply with UK law. However, the change is expected to apply primarily to users in the UK.

The complaints surrounding Grok are not new. In 2022, users could manipulate posted images to produce altered visuals of women in revealing clothing or sexually suggestive poses. X’s “spicy” feature enabled the generation of topless images with minimal prompting. Such incidents raise significant concerns about the need for more robust regulation of AI technologies.

Despite the public outcry and scrutiny from regulatory bodies, X initially appeared to downplay the issue, limiting Grok’s access to paying subscribers. This inaction prompted various governments to intervene. The UK announced plans to legislate against deepfake tools, joining Denmark and Australia in efforts to criminalize such non-consensual sexual material. The UK regulator, Ofcom, has launched an investigation into X, seemingly catalyzing Musk’s decision to alter Grok’s functionality.

Meanwhile, the New Zealand government has remained silent on the matter, even though local laws are inadequately equipped to prevent or penalize the creation of non-consensual sexualized deepfakes. The Harmful Digital Communications Act 2015 offers some recourse but requires victims to demonstrate “serious emotional distress,” shifting the focus to their emotional response rather than the nature of the act itself. The legal ambiguity becomes even murkier when the images are entirely synthetic and lack a reference photo.

Proposed changes to legislation in New Zealand would introduce criminal penalties for the creation, possession, and distribution of sexualized deepfakes without consent. While this reform is a necessary step, it addresses only part of the problem. Criminalization holds individuals accountable after harm occurs but fails to address the responsibilities of companies that design and deploy AI tools capable of producing such images.

Social media platforms are held accountable for removing child sexual abuse material; similar expectations should apply to deepfakes. While users are ultimately responsible for their actions, platforms like X facilitate easy access to deepfake creation, removing technical barriers for users. The ongoing issues surrounding Grok illustrate that the resulting harm is predictable, and treating these incidents as isolated cases distracts from the platform’s broader responsibilities.

Light-touch regulation has proven ineffective. Although X and other social media companies have signed the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms, the code is already outdated. It sets no standards for generative AI, does not require companies to conduct risk assessments before deploying AI tools, and imposes no meaningful consequences for failing to prevent foreseeable abuse. Consequently, X has been able to let Grok produce deepfakes while technically complying with the existing code.

Victims may seek redress from X by filing a complaint with the Privacy Commissioner under the Privacy Act 2020. Guidance from the commissioner suggests that both the use of someone’s image as a prompt and the resulting deepfake could be considered personal information. However, investigations can take years, and the compensation awarded is usually minimal. Responsibility is often diffused among users, platforms, and AI developers, doing little to enhance the safety of platforms or tools like Grok before harm occurs.

New Zealand’s regulatory approach reflects a broader political inclination towards light-touch AI governance, which assumes that technological advancements will be matched by adequate self-regulation. However, this assumption is proving to be flawed. The competitive pressure to roll out new features quickly prioritizes innovation and engagement over user safety, with gendered harm often accepted as an unfortunate byproduct.

As technologies evolve, they inevitably reflect the societal conditions in which they are developed and implemented. Generative AI systems trained on vast amounts of human data may inadvertently absorb misogynistic norms. Integrating these systems into platforms without robust safeguards facilitates the creation of sexualized deepfakes that exacerbate existing patterns of gender-based violence.

The implications extend beyond individual humiliation. The knowledge that a convincing sexualized image can be generated by virtually anyone creates a persistent threat, altering how women interact online. For politicians and public figures, the potential for abuse can deter participation in public discourse, ultimately narrowing the digital public space.

Criminalizing deepfakes alone will not resolve these issues. New Zealand requires a comprehensive regulatory framework that recognizes AI-enabled gendered harm as both foreseeable and systemic. This necessitates clear obligations for companies deploying these AI tools, including duties to assess risks, implement effective safeguards, and prevent predictable misuse before it occurs. The case of Grok serves as an early warning of the challenges that lie ahead. As AI becomes increasingly embedded across digital platforms, the disconnect between technological capabilities and legislative frameworks will continue to widen unless decisive action is taken.

Moreover, Musk’s swift response to legislative pressures in the UK underscores the potential effectiveness of robust political will and regulation in shaping corporate behavior.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.