SYDNEY – Australia’s digital safety authority is threatening to take action against major tech companies such as Apple and Google unless artificial intelligence platforms implement age verification systems by a March 9 deadline. The move marks a significant step in the country’s effort to regulate AI, after Australia became the first nation to ban social media access for teenagers over mental health concerns.
The Australian internet watchdog’s warning follows a Reuters investigation revealing that more than half of popular AI services have not publicly outlined compliance plans ahead of the deadline. The initiative, one of the world’s most ambitious regulatory efforts, comes as AI companies face mounting legal challenges, particularly accusations that their products failed to prevent, or even promoted, self-harm and violence among vulnerable users.
Under the new regulations taking effect on March 9, internet platforms operating in Australia, including AI tools like OpenAI’s ChatGPT and various companion chatbots, must block users under 18 from accessing pornographic material, extreme violence, self-harm content, and information related to eating disorders. Non-compliance could result in penalties of up to A$49.5 million (about US$35 million).
A spokesperson for the eSafety Commissioner stated, “eSafety will use the full range of our powers where there is non-compliance,” emphasizing the role of key access points such as search engines and app stores. This regulatory move follows reports of certain AI platforms being involved in legal cases related to wrongful death, particularly concerning interactions with young users. Recently, OpenAI disclosed that it had disabled the ChatGPT account of a teenage mass shooting suspect in Canada months prior to the incident, although law enforcement was not informed.
While Australia has not yet documented incidents of chatbot-related violence or self-harm, concerns have been raised about children, some as young as 10, spending up to six hours daily engaging with AI-driven conversational tools. The safety commissioner has expressed apprehension that “AI companies are leveraging emotional manipulation, anthropomorphism, and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage.”
Apple, the leading app store operator, has yet to respond to inquiries but noted on its website that it would employ “reasonable methods” to prevent minors from downloading adult-rated apps, without detailing these measures. Google, which holds a dominant position in Australia’s search market, also declined to comment through a spokesperson.
Jennifer Duxbury, policy director at the digital industry organization DIGI, played a significant role in drafting the AI regulations. She highlighted that eSafety is actively working to inform chatbot services about the new requirements, but emphasized that “ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them.”
Amid growing scrutiny, the Reuters analysis found that just a week before the compliance deadline, only nine of the 50 most widely used text-based AI products had established or announced age verification systems. An additional 11 platforms had implemented comprehensive content filters or planned to block all Australian users entirely, thus complying with the law by preventing restricted content from reaching any users. However, 30 platforms showed no visible efforts toward compliance.
Major chatbots, including ChatGPT, Replika, and Anthropic’s Claude, have begun implementing age verification or comprehensive filtering systems. In contrast, Character.AI has restricted open-ended conversations for users under 18. Several companion chatbot companies, such as Candy AI, Pi, Kindroid, and Nomi, have indicated intentions to comply without disclosing specific details, while HammerAI announced it would initially block its services from Australia to meet the requirements.
Despite these measures, compliant companies represent a small fraction of the market. Among companion chatbots, approximately three-quarters lacked functioning or planned filtering and age verification systems, and one-sixth failed to provide published email addresses for reporting suspected violations, another mandatory requirement under the regulations. Notably, Elon Musk’s chatbot Grok, which is currently under global investigation for allegedly enabling the creation of synthetic sexualized images of children, showed no age verification or content filtering measures.
Lisa Given, director of RMIT University’s Centre for Human-AI Information Environments, commented on the findings, stating that it was unsurprising as “most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls.” She added, “It feels as though we’re beta testing all of these things for these companies and they’re trying to see how far society is willing to be pushed.”