
Australia’s AI Plan Lacks Dedicated Regulation, Experts Warn of Growing Risks

Australia’s new AI Plan lacks mandatory regulations, raising concerns among experts as incidents of AI misuse, including deepfake exploitation, surge.

Australia’s much-anticipated National AI Plan, unveiled earlier this month, has received a mixed reception from experts concerned about its lack of specificity and measurable targets. The plan pivots away from previously promised mandatory AI safeguards, instead presenting a broad roadmap for developing an “AI-enabled economy.” Critics argue that in the absence of dedicated AI regulations, the most vulnerable Australians are at risk of harm as incidents of AI misuse continue to rise globally.

Reports of AI-related harms, from cybercrime exploiting deepfakes to disinformation campaigns powered by generative AI, have been increasing. In Australia, the alarming spread of AI-generated child sexual abuse material underscores the inadequacy of current laws to protect victims effectively. Without robust regulations, experts warn that existing legal frameworks may exacerbate injustices rather than mitigate them.

The new AI plan does not include provisions for a standalone AI Act or concrete recommendations for reforming existing legislation. Instead, it proposes establishing an AI Safety Institute alongside voluntary codes of conduct. Assistant Minister for Science, Technology and the Digital Economy Andrew Charlton said the institute will collaborate with regulators to ensure the safe integration of AI technologies. However, the institute's powers are largely advisory, raising concerns about its ability to enforce meaningful oversight.

Australia’s history of harms from automated decision-making, exemplified by the Robodebt scandal, highlights the inadequacy of current legal protections. Existing laws are insufficient to address the evolving range of harms associated with AI technologies, and without reform the national plan may inadvertently amplify systemic injustices rather than resolve them.

Holding tech companies accountable for AI-related harms presents a significant challenge. Major players in the technology sector, such as Google and OpenAI, are leveraging “fair use” provisions in U.S. copyright law to justify data scraping practices. Social media giants like Meta and TikTok exploit legal loopholes, including broad immunity under the U.S. Communications Decency Act, to evade liability for harmful content. Moreover, many companies are using special purpose acquisition companies, or shell entities, to sidestep antitrust regulations designed to curb anti-competitive behavior.

Australia’s national AI plan adopts a “technology-neutral” approach, suggesting that existing laws are adequate to mitigate potential AI-related risks. This perspective maintains that privacy breaches, consumer fraud, discrimination, copyright issues, and workplace safety can be addressed with minimal regulation, reserving intervention for cases deemed necessary. The AI Safety Institute is expected to “monitor and advise” on these matters.

Current laws referenced as adequate include the Privacy Act, Australian Consumer Law, anti-discrimination statutes, and sector-specific regulations, particularly those in healthcare. While this may appear to provide comprehensive oversight, significant legal gaps remain, particularly concerning generative AI, deepfakes, and synthetic data used for AI training. Foundational issues, such as algorithmic bias, autonomous decision-making, and environmental risks, are compounded by a lack of transparency and accountability. In this rapidly changing landscape, big tech companies often exploit legal uncertainties and lobbying efforts to delay compliance and evade responsibility.

Experts warn that Australia risks becoming a jurisdiction of choice for companies seeking to exploit weaker regulatory environments, a phenomenon known as “regulatory arbitrage.” To combat this, there is a call for global consistency and harmonization of relevant laws. Two frameworks in particular may serve as valuable guides: the EU AI Act and Aotearoa New Zealand’s Māori AI Governance Framework. These frameworks provide structured approaches to governing AI that Australia could emulate.

The EU AI Act, celebrated as the world’s first AI-specific legislation, establishes clear rules governing permissible AI activities and assigns legal obligations based on the potential societal risks posed by various AI systems. It incorporates enforcement mechanisms, including specific financial penalties and governance bodies at both EU and national levels. In contrast, the Māori AI Governance Framework emphasizes Indigenous data sovereignty principles, addressing the unique needs of Māori communities in relation to AI technologies. Its four pillars provide comprehensive actions aimed at safeguarding community health and safety.

While the EU AI Act and the Māori Framework articulate clear values and convert them into enforceable protections, Australia’s plan claims to reflect “Australian values” but falls short of offering the necessary regulatory framework or cultural specificity to uphold them. Legal experts argue that Australia needs robust accountability structures for AI that do not leave individuals to navigate outdated laws against well-resourced corporations alone.

The choice facing Australia is stark: pursue an “AI-enabled economy” at all costs or prioritize community safety and justice in the deployment of transformative technologies. The path taken will shape not only technological advancement but societal values and protections for years to come.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.