AI Regulation

eSafety Commissioner Urges X to Address Grok’s Potential Abuse Amid AI Safety Concerns

eSafety Commissioner warns X of potential legal action over Grok as complaints rise sharply, highlighting urgent online safety concerns for Australian families.

The eSafety Commissioner has raised concerns about the use of Grok, the generative artificial intelligence tool available on the social media platform X, citing the risk that it can be used to create sexualised or exploitative images. Although the overall number of reports remains low, the regulator has seen a troubling rise in complaints over the past two weeks and is preparing to take legal action, including issuing removal notices, where content crosses the thresholds set out in the Online Safety Act.

Families and schools across Australia are urged to remain vigilant, even as X and similar services are already required to meet stringent safety obligations. Under Australia’s industry codes, companies must proactively detect and remove child sexual exploitation material and other unlawful content. In response to the recent rise in complaints, the Commissioner has formally written to X seeking clarity on the safeguards in place to prevent misuse of Grok.

The move follows enforcement action in early 2025 that saw several popular nudification services, which had been targeting school children, withdraw from the Australian market. Regulation is set to tighten further: new mandatory codes taking effect on March 9, 2026 are intended to restrict children’s access to sexually explicit or violent material produced by artificial intelligence services, and will also address content related to self-harm and suicide.

In the interim, the government expects all platforms to meet the Basic Online Safety Expectations by taking proactive steps to curb harmful activity before it spreads. The scrutiny of X is not unprecedented; the platform has previously received transparency notices over its handling of child abuse material and its deployment of generative AI features. Australian authorities are also working with international child protection organizations that have observed similar patterns of misuse involving Grok and other advanced tools around the globe.

These developments underscore the need for parents to stay alert as the push for “safety by design” gains traction in the ongoing effort to protect children from emerging digital threats. As concerns about online safety escalate, the responsibility falls on both technology companies and regulators to ensure adequate measures are in place to protect the most vulnerable users from exploitation and abuse.

Written by AiPressa Staff
