AI Technology

Australia Demands Age Verification for AI Services by March 9, Targets Apple and Google

Australia mandates major tech firms like Apple and Google to implement age verification for AI services by March 9 or face penalties up to A$49.5 million.

SYDNEY – Australia’s digital safety authority is threatening action against major tech companies such as Apple and Google unless artificial intelligence platforms implement age verification systems by the March 9 deadline. The move marks a significant step in the country’s efforts to regulate AI technologies, building on its status as the first nation to ban teenagers from social media over mental health concerns.

The Australian internet watchdog’s warning follows a Reuters investigation revealing that more than half of popular AI services have not publicly outlined compliance strategies ahead of the deadline. One of the world’s most ambitious regulatory efforts, the initiative comes as AI companies face mounting legal challenges, particularly accusations that they failed to prevent, or even promoted, self-harm and violence among vulnerable users.

Under the new regulations set to take effect on March 9, internet platforms operating in Australia, including AI tools like OpenAI’s ChatGPT and various companion chatbots, must ensure that users under 18 are blocked from accessing pornographic material, extreme violence, self-harm content, and information related to eating disorders. Non-compliance could result in penalties of up to A$49.5 million (approximately $35 million).

A spokesperson for the eSafety Commissioner stated, “eSafety will use the full range of our powers where there is non-compliance,” emphasizing the role of key access points such as search engines and app stores. This regulatory move follows reports of certain AI platforms being involved in legal cases related to wrongful death, particularly concerning interactions with young users. Recently, OpenAI disclosed that it had disabled the ChatGPT account of a teenage mass shooting suspect in Canada months prior to the incident, although law enforcement was not informed.

While Australia has not yet documented incidents of chatbot-related violence or self-harm, concerns have been raised about children, some as young as 10, spending up to six hours daily engaging with AI-driven conversational tools. The safety commissioner has expressed apprehension that “AI companies are leveraging emotional manipulation, anthropomorphism, and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage.”

Apple, the leading app store operator, did not respond to inquiries, but states on its website that it will employ “reasonable methods” to prevent minors from downloading adult-rated apps, without detailing those measures. Google, which holds a dominant position in Australia’s search market, declined to comment through a spokesperson.

Jennifer Duxbury, policy director at the digital industry organization DIGI, played a significant role in drafting the AI regulations. She highlighted that eSafety is actively working to inform chatbot services about the new requirements, but emphasized that “ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them.”

Amid growing scrutiny, the Reuters analysis found that just a week before the compliance deadline, only nine of the 50 most widely used text-based AI products had established or announced age verification systems. An additional 11 platforms had implemented comprehensive content filters or planned to block all Australian users entirely, thus complying with the law by preventing restricted content from reaching any users. However, 30 platforms showed no visible efforts toward compliance.

Major conversational search tools, including ChatGPT, Replika, and Anthropic’s Claude, have begun implementing age verification or comprehensive filtering systems. In contrast, Character.AI has restricted open-ended conversations for users under 18. Several companion chatbot companies, such as Candy AI, Pi, Kindroid, and Nomi, have indicated intentions to comply without disclosing specific details, while HammerAI announced it would initially block its services from Australia to meet the requirements.

Despite these measures, compliant companies represent a small fraction of the market. Among companion chatbots, approximately three-quarters lacked functioning or planned filtering and age verification systems, and one-sixth failed to provide a published email address for reporting suspected violations, another mandatory requirement under the regulations. Notably, Elon Musk’s conversational search tool Grok, which is currently under global investigation for allegedly enabling the creation of synthetic sexualized images of children, showed no age verification or content filtering measures.

Lisa Given, director of RMIT University’s Centre for Human-AI Information Environments, commented on the findings, stating that it was unsurprising as “most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls.” She added, “It feels as though we’re beta testing all of these things for these companies and they’re trying to see how far society is willing to be pushed.”

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.