AI Regulation

eSafety Commissioner Warns X Over Grok’s Potential for Exploitative AI Content

Australia’s eSafety Commissioner warns X of rising complaints about Grok’s misuse for exploitative AI content, signaling potential legal action under the Online Safety Act.

The eSafety Commissioner of Australia has raised alarms regarding the use of the generative artificial intelligence system known as Grok on the social media platform X. Concerns have emerged that Grok is being employed to create sexualised or exploitative images of individuals. While initial reports of such misuse were low, officials have observed a troubling trend, with complaints rising sharply over the past two weeks.

In light of these developments, the commissioner has indicated a readiness to employ legal powers, including the issuance of removal notices, whenever content crosses the thresholds established by the Online Safety Act. Families and educational institutions should be aware that X and comparable platforms are already subject to systemic safety obligations under Australia's industry codes, which require the detection and removal of child sexual exploitation material and other unlawful content.

The commissioner has taken a proactive stance, communicating directly with X to demand transparency regarding the safeguards in place to prevent the misuse of Grok. This action follows a significant enforcement initiative in 2025, which forced several popular "nudify" services to cease operations in Australia after they were used to target schoolchildren. The growing scrutiny reflects broader concerns about the intersection of technology and child safety.

Looking ahead, stricter regulations are anticipated for technology companies. New mandatory codes are set to be implemented on March 9, 2026, which will compel artificial intelligence services to restrict children’s access to sexually explicit or violent material. These forthcoming regulations will also address content related to self-harm and suicide, underscoring the government’s commitment to enhancing online safety.

For now, the government expects all platforms to take proactive measures to curtail harmful activities before they escalate. The ongoing scrutiny of X is not unprecedented; the company has previously received transparency notices regarding its management of child abuse material and its generative AI features. Australian authorities are collaborating with international child protection organizations that have reported similar patterns of misuse associated with Grok and other advanced tools on a global scale.

These developments serve as a crucial reminder for parents and caregivers to remain vigilant in protecting children from potential digital threats. As the push for safety by design becomes a central issue in child protection in the digital age, the role of regulators and technology companies in ensuring a secure online environment will be pivotal.

Written By
AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.