AI Regulation

eSafety Commissioner Warns X Over Grok’s Potential for Exploitative AI Content

Australia’s eSafety Commissioner warns X of rising complaints about Grok’s misuse for exploitative AI content, signaling potential legal action under the Online Safety Act.

Australia's eSafety Commissioner has raised the alarm over the use of Grok, the generative artificial intelligence system on the social media platform X, amid concerns that the tool is being used to create sexualised or exploitative images of individuals. While initial reports of such misuse were low, officials have observed a troubling trend, with complaints rising sharply over the past two weeks.

In light of these developments, the commissioner has indicated a readiness to use legal powers, including the issuance of removal notices, whenever content crosses the thresholds established by the Online Safety Act. Local families and educational institutions are urged to recognise that X and comparable platforms are already subject to systemic safety obligations under Australia's industry codes, aimed at detecting and removing child sexual exploitation material and other unlawful content.

The commissioner has taken a proactive stance, communicating directly with X to demand transparency about the safeguards in place to prevent the misuse of Grok. This action follows a significant enforcement initiative in 2025 that forced several popular "nudify" services to cease operations in Australia due to their targeting of schoolchildren. The growing scrutiny reflects broader concerns about the intersection of technology and child safety.

Looking ahead, stricter regulations are anticipated for technology companies. New mandatory codes are set to be implemented on March 9, 2026, which will compel artificial intelligence services to restrict children’s access to sexually explicit or violent material. These forthcoming regulations will also address content related to self-harm and suicide, underscoring the government’s commitment to enhancing online safety.

For now, the government expects all platforms to take proactive measures to curtail harmful activities before they escalate. The ongoing scrutiny of X is not unprecedented; the company has previously received transparency notices regarding its management of child abuse material and its generative AI features. Australian authorities are collaborating with international child protection organizations that have reported similar patterns of misuse associated with Grok and other advanced tools on a global scale.

These developments serve as a crucial reminder for parents and caregivers to remain vigilant in protecting children from potential digital threats. As the push for safety by design becomes a central issue in child protection in the digital age, the role of regulators and technology companies in ensuring a secure online environment will be pivotal.

Written By
AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

