Australia’s much-anticipated National AI Plan, unveiled earlier this month, has drawn a mixed reception from experts concerned about its lack of specificity and measurable targets. The plan pivots away from previously promised mandatory AI safeguards, instead presenting a broad roadmap for developing an “AI-enabled economy.” Critics argue that, in the absence of dedicated AI regulation, the most vulnerable Australians are at risk of harm as incidents of AI misuse continue to rise globally.
Reports of AI-related harms, from cybercrime exploiting deepfakes to disinformation campaigns powered by generative AI, have been increasing. In Australia, the alarming spread of AI-generated child sexual abuse material underscores how inadequately current laws protect victims. Experts warn that, without robust regulation, existing legal frameworks may exacerbate injustices rather than mitigate them.
The new AI plan does not include provisions for a standalone AI Act or concrete recommendations for reforming existing legislation. Instead, it proposes establishing an AI Safety Institute alongside voluntary codes of conduct. Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton, stated that the institute will collaborate with regulators to ensure the safe integration of AI technologies. However, the institute’s powers are largely advisory, raising concerns about its ability to provide meaningful oversight.
Australia’s history of harm caused by automated decision-making, exemplified by the Robodebt scandal, highlights the inadequacy of current legal protections. As it stands, existing laws are insufficient to address the evolving range of harms associated with AI technologies, and the national plan may inadvertently amplify systemic injustices rather than resolve them.
Holding tech companies accountable for AI-related harms presents a significant challenge. Major players in the technology sector, such as Google and OpenAI, invoke “fair use” provisions in U.S. copyright law to justify data-scraping practices. Social media giants like Meta and TikTok exploit legal loopholes, including broad immunity under the U.S. Communications Decency Act, to evade liability for harmful content. Many companies also use special purpose acquisition companies, or shell entities, to sidestep antitrust regulations designed to curb anti-competitive behavior.
Australia’s national AI plan adopts a “technology-neutral” approach, suggesting that existing laws are adequate to mitigate potential AI-related risks. On this view, privacy breaches, consumer fraud, discrimination, copyright disputes, and workplace safety risks can be addressed with minimal new regulation, with intervention reserved for cases deemed necessary. The AI Safety Institute is expected to “monitor and advise” on these matters.
Current laws cited as adequate include the Privacy Act, Australian Consumer Law, anti-discrimination statutes, and sector-specific regulations, particularly in healthcare. While this may appear to provide comprehensive oversight, significant legal gaps remain, particularly around generative AI, deepfakes, and synthetic data used for AI training. Foundational issues, such as algorithmic bias, autonomous decision-making, and environmental risks, are compounded by a lack of transparency and accountability. In this rapidly changing landscape, big tech companies often exploit legal uncertainty and lobby to delay compliance and evade responsibility.
Experts warn that Australia risks becoming a jurisdiction of choice for companies seeking to exploit weaker regulatory environments, a phenomenon known as “regulatory arbitrage.” To combat this, there is a call for global consistency and harmonization of relevant laws. Two frameworks in particular may serve as valuable guides: the EU AI Act and Aotearoa New Zealand’s Māori AI Governance framework. These frameworks provide structured approaches to governing AI that Australia could emulate.
The EU AI Act, celebrated as the world’s first AI-specific legislation, establishes clear rules governing permissible AI activities and assigns legal obligations based on the potential societal risks posed by various AI systems. It incorporates enforcement mechanisms, including specific financial penalties and governance bodies at both EU and national levels. In contrast, the Māori AI Governance Framework emphasizes Indigenous data sovereignty principles, addressing the unique needs of Māori communities in relation to AI technologies. Its four pillars provide comprehensive actions aimed at safeguarding community health and safety.
While the EU AI Act and the Māori Framework articulate clear values and convert them into enforceable protections, Australia’s plan claims to reflect “Australian values” but falls short of offering the necessary regulatory framework or cultural specificity to uphold them. Legal experts argue that Australia needs robust accountability structures for AI that do not leave individuals to navigate outdated laws against well-resourced corporations alone.
The choice facing Australia is stark: pursue an “AI-enabled economy” at all costs, or prioritize community safety and justice in the deployment of transformative technologies. The path taken will shape not only technological advancement but also societal values and protections for years to come.
See also
OMB Issues New Guidelines for AI Procurement: Agencies Must Ensure LLMs Are Unbiased by March 11
Anthropic Disrupts First AI-Driven Cyber-Espionage Campaign Targeting 30 Firms
Thomson Reuters Launches CoCounsel Legal, Reducing Document Review Time by 63%
Pennsylvania Advances AI Regulation Bill as Healthcare Integrates AI Tools Daily
TRM Labs and Sphinx Partner to Automate AML Compliance with AI Agents, Enhancing Fraud Detection