
Australia Unveils New AI Guidance, Condensing Ten Standards into Six Essential Practices

Australia’s new AI guidance streamlines ten voluntary standards into six essential practices, emphasizing accountability and risk management for developers and deployers.

New guidance for the development and deployment of Artificial Intelligence (AI) has replaced the existing voluntary standard in Australia, casting doubt on previously proposed mandatory regulatory measures.

In late October, the Australian Department of Industry, Science and Resources along with the National AI Centre published the Guidance for AI Adoption (GfAA). This new framework comes just a year after the introduction of the Voluntary AI Safety Standard (VAISS), raising questions about the future of mandatory guidelines for AI deployment, which had been anticipated by stakeholders and industry experts.

The GfAA is framed as a response to the rapid technological advancements and evolving governance landscape observed in the past year, as well as feedback from industry participants. While the VAISS served as a non-binding guideline for AI best practices in Australia, the GfAA condenses its ten principles into six essential practices aimed at both AI developers and deployers.

Unlike the broader, principles-based VAISS, the GfAA adopts a more prescriptive approach, placing strong emphasis on the entire lifecycle of AI systems, from development and deployment to ongoing assessment. Australia's eight AI Ethics Principles remain integral to the new guidance, continuing to inform public policy on the secure and reliable use of AI technologies.

Among the main changes, the GfAA specifies six essential practices: establishing accountability, understanding impacts, measuring and managing risks, ensuring transparency, testing and monitoring systems, and maintaining human oversight. These practices replace the previous ten guardrails of the VAISS while still reflecting their underlying intent. For instance, the GfAA emphasizes accountability throughout the AI lifecycle and the necessity of stakeholder engagement, with a specific focus on fairness and rights.

Two versions of the GfAA have been made available: a "Foundations" version tailored for organizations at the outset of their AI journey, and an "Implementation Practices" version designed for those with more advanced needs. Notably, neither version distinguishes between small and large businesses, emphasizing AI fluency across the board. The more detailed Implementation Practices version may nonetheless offer greater guidance for larger enterprises, particularly those that have already begun shaping their internal AI policies in accordance with the VAISS and the Ethics Principles.

The GfAA is accompanied by a “crosswalk” document that maps the corresponding provisions between the VAISS and the GfAA, serving as a useful tool for organizations that previously aligned their governance protocols with the earlier standard. Further insight can also be drawn from the AI policies and contractual standards recently published for the Australian Public Service (APS). As a significant player in the market, the APS’s approach is likely to impact AI adoption and contracting practices across the country.

At the time the VAISS was released, the government also presented a proposals paper for introducing mandatory guardrails specifically for high-risk AI applications. This paper aimed to define high-risk AI, propose mandatory guidelines for such systems, and explore options for effective regulation. The mandatory guidelines were essentially a reiteration of the VAISS, with the notable addition of requiring AI deployers to undertake conformity assessments to demonstrate compliance.

With the GfAA now supplanting the VAISS, uncertainty surrounds the future of these mandatory guardrails. As of now, the government has not indicated plans for a mandatory equivalent to the GfAA. This ambiguity aligns with views from some quarters, including the Productivity Commission’s interim report published earlier this year, suggesting that imposing mandatory regulations could potentially stifle innovation and economic growth.

The GfAA is currently accessible for organizations to adopt, although it remains non-binding. The National AI Centre plans to introduce additional complementary tools and resources over the next year. For organizations that have structured their AI governance policies based on the VAISS, the Department of Industry, Science and Resources confirms that the Implementation Practices version of the GfAA builds upon these earlier principles, offering a pathway for continued reliance on existing policies while allowing for future revisions based on new guidance.

As Australia looks toward the future of AI governance, it is unlikely that technology-specific legislation will emerge before 2026. Instead, organizations engaging in AI must continue to navigate the existing largely technology-neutral laws, leveraging the non-binding guidance currently available. For further information on applicable laws and regulations, White & Case’s global regulatory tracker, AI Watch, provides comprehensive insights into the evolving landscape.

Written by AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.