The eSafety Commissioner has raised concerns over the use of the generative artificial intelligence tool Grok on the social media platform X, highlighting risks associated with the creation of sexualised or exploitative images. Although the number of reports remains low, officials have observed a troubling increase in complaints over the past two weeks, prompting the regulator to prepare enforcement action, including the issuance of removal notices, where content meets the thresholds set out in the Online Safety Act.
Families and schools across Australia are urged to remain vigilant. X and similar services are already obliged to adhere to stringent safety requirements: under Australia's industry codes, companies must proactively detect and remove child sexual exploitation material and other unlawful content. In response to the recent uptick in concerns, the commissioner has formally contacted X to seek clarity on the safeguards designed to prevent misuse of Grok.
This follows enforcement action in early 2025 that saw several popular nudification services withdraw from the Australian market after targeting school children. Regulation is set to tighten further: new mandatory codes taking effect on March 9, 2026 will restrict children's access to sexually explicit or violent material produced by artificial intelligence services, and will also cover content related to self-harm and suicide.
In the interim, the government expects all platforms to meet the Basic Online Safety Expectations by taking proactive measures to curtail harmful activity before it proliferates. The scrutiny of X is not unprecedented; the platform has previously received transparency notices over its handling of child abuse material and its use of generative AI features. Australian authorities are currently collaborating with international child protection organisations that have observed similar patterns of misuse involving Grok and other advanced tools around the globe.
These developments underscore the need for parents to stay alert as the "safety by design" movement gains traction in the ongoing effort to safeguard children from emerging digital threats. As concerns about online safety continue to escalate, responsibility falls on both technology companies and regulators to ensure adequate measures are in place to protect the most vulnerable users from exploitation and abuse.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health