The eSafety Commissioner of Australia has raised alarms regarding the use of the generative artificial intelligence system known as Grok on the social media platform X. Concerns have emerged that Grok is being used to create sexualised or exploitative images of individuals. While such misuse was initially reported only rarely, officials have observed a troubling trend, with complaints rising sharply over the past two weeks.
In light of these developments, the commissioner has indicated a readiness to employ legal powers, including the issuance of removal notices, whenever content crosses the thresholds established by the Online Safety Act. Local families and educational institutions are urged to recognize that X and comparable platforms are already subject to systemic safety obligations aimed at detecting and removing child sexual exploitation material and other unlawful content under Australia’s industry codes.
The commissioner has taken a proactive stance, communicating directly with X to demand transparency about the safeguards in place to prevent misuse of Grok. This follows a significant enforcement initiative in 2025, which forced several popular "nudify" services to cease operations in Australia after they were found to be targeting school children. The growing scrutiny reflects broader concerns about the intersection of technology and child safety.
Looking ahead, stricter regulations are anticipated for technology companies. New mandatory codes are set to be implemented on March 9, 2026, which will compel artificial intelligence services to restrict children’s access to sexually explicit or violent material. These forthcoming regulations will also address content related to self-harm and suicide, underscoring the government’s commitment to enhancing online safety.
For now, the government expects all platforms to take proactive measures to curtail harmful activities before they escalate. The ongoing scrutiny of X is not unprecedented; the company has previously received transparency notices regarding its management of child abuse material and its generative AI features. Australian authorities are collaborating with international child protection organizations that have reported similar patterns of misuse associated with Grok and other advanced tools on a global scale.
These developments serve as a crucial reminder for parents and caregivers to remain vigilant in protecting children from potential digital threats. As the push for safety by design becomes a central issue in child protection in the digital age, the role of regulators and technology companies in ensuring a secure online environment will be pivotal.