Australia’s artificial intelligence (AI) landscape is under scrutiny following remarks from leading AI researcher Toby Walsh, who warned that the country’s lack of a regulatory framework is jeopardizing young people, who he says are being “sacrificed for the profits of big tech.” Walsh’s comments came in the wake of the Australian government’s decision to abandon plans for a dedicated advisory body of AI experts. Instead, the government has proposed a National AI Plan that prioritizes investment in data centers, telecommunications infrastructure, and workforce training.
The new plan includes the establishment of an “AI Safety Institute,” which is currently recruiting staff, as well as some internal transparency measures for public sector AI applications. However, early results from the transparency initiative have been less than satisfactory.
By global standards, Australia’s approach to AI regulation is comparatively restrained. The European Union’s AI Act serves as a notable benchmark, with provisions that explicitly prohibit AI systems that exploit vulnerable individuals. Even so, Europe faces challenges in implementing its rules for high-risk AI applications that fall outside these outright prohibitions.
Countries in Australia’s region, such as South Korea, Japan, and Taiwan, are also moving forward with legislation designed to give governments the authority to act when deemed essential. Yet industry pushback is anticipated in these nations as well.
On the other hand, the regulatory landscape in both the United States and the United Kingdom remains fragmented. The U.S. government under President Donald Trump has moved to block state-level regulation of private AI use, though it maintains stringent safeguards for federal applications. Similarly, the UK has struggled to establish a coherent regulatory framework, instead pursuing non-legal technical safeguards through bodies such as its AI Security Institute.
The differing regulatory strategies among nations highlight a longstanding dilemma articulated by English technology scholar David Collingridge: when regulatory change is easy to implement, the need for it is rarely foreseen; by the time the need becomes apparent, change has become costly and complex. As a minor player in global AI development, Australia has limited capacity to shape international policy, in contrast to its prominence in sectors such as mining.
Australia’s current approach leans heavily on existing legal frameworks. In a recent address, the Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton, emphasized the importance of “regulatory certainty” grounded in clear principles with broad support. This aligns with the government’s assertion that established Australian laws already encompass AI and emerging technologies, citing consumer protection law on misleading and deceptive conduct as one example.
However, the Australian government has itself acknowledged regulatory challenges that remain unresolved. As it identified in 2023, AI systems can operate semi-autonomously, making it difficult to attribute liability and responsibility for risks or harms under traditional legal frameworks. These limitations have yet to be systematically addressed.
The current landscape is a patchwork of at least 21 mandatory or quasi-mandatory federal and state policies governing AI use in the public sector. Courts have had little opportunity to clarify how existing law applies, as few test cases have emerged in vital areas such as negligence, discrimination, and consumer law.
While the government has pledged to monitor AI development and deployment and respond to emerging challenges, questions remain about how effective that monitoring will be. Will the government genuinely empower all agencies to take responsibility for AI, and can it address cross-cutting issues such as privacy and anti-discrimination, which require both funding and coordination?
The future of AI regulation in Australia appears uncertain. A shift in U.S. government policy following the 2028 elections could reshape Australia’s regulatory approach, just as Australia’s proposed mandatory AI guardrails were abandoned early in the Trump administration. The ongoing reliance on a laissez-faire approach raises questions about whether it can genuinely deliver predictability while regulatory processes stall.
As Australia navigates this uncertainty, the government seems inclined to expect courts, agencies, businesses, and individuals to adapt existing laws to new technological realities. While there is some hope for improved regulation of automated decision-making in the public sector, particularly in light of issues raised by the Robodebt Royal Commission, much of the response to AI amounts to a “wait and see” strategy.