The Australian Defence Force (ADF) has released a document titled “Policy Settings for Responsible Use of Artificial Intelligence in Defence,” outlining its approach to integrating AI into its operations while prioritizing safety and responsibility. The policy comes in response to the increasingly complex strategic and operational environments noted in the 2024 National Defence Strategy, which highlights growing strategic competition as a core feature of Australia’s security landscape.
According to the ADF, AI and other emerging technologies are becoming vital in this competitive context. Characterized as a general-purpose technology, AI is penetrating various government and commercial sectors, enhancing infrastructure, products, and services. The ADF acknowledges that while AI holds tremendous potential for improving accuracy, efficiency, speed, and safety in defence functions, public trust is crucial for successful adoption.
In line with global military trends, the ADF plans to utilize AI primarily as a decision-support tool, ensuring that human personnel retain ultimate control over critical decisions. The force is currently investigating how AI can streamline both civilian and military operations, including analysing data, automating repetitive or hazardous tasks, optimizing logistics, and enhancing physical and cyber security.
The ADF’s policy encompasses all facets of AI technology, including design, development, deployment, and decommissioning, applying to both combat capabilities and enabling functions. The principles outlined in the policy fall into three key categories: lawfulness, adherence to values-based principles, and proportionate controls.
Under the lawfulness category, the ADF commits to ensuring its AI usage complies with domestic and international law, including international humanitarian law and human rights law. An “accountable officer” will be designated to oversee each AI capability throughout its lifecycle, ensuring that accountability is maintained at every stage. The ADF noted, “As the AI capability moves through the life cycle, the accountable officer will change; however, all officials will remain accountable for the decisions or contributions they made in any stage.” This includes a documented transfer of accountability whenever an AI capability progresses from acquisition to operational service.
When it comes to AI applications in weapons systems, the ADF has pledged adherence to the Geneva Conventions, committing to thorough reviews of all weapons and methods of warfare affected by AI integration.
In its commitment to values-based principles, the ADF aims to focus on accountability, bias and harm mitigation, and the explainability of AI inputs and outputs. The force has underscored the importance of a reliable and secure approach to all AI-related use cases. “Defence will apply values-based principles to ensure that our use of AI technology is in line with Australia’s high legal and ethical standards and public expectations,” the ADF stated.
The third principle, proportionate controls, involves implementing risk-based control measures that correspond to potential consequences and unintended outcomes. The ADF has outlined that these measures will include layered policies, processes, training, and procedures, as well as ongoing evaluations to identify, assess, and mitigate risks throughout the technology lifecycle.
The ADF’s proactive stance on AI reflects a broader trend among militaries worldwide, as they seek to harness the capabilities of advanced technologies while emphasizing ethical considerations. As AI continues to evolve, the ADF’s framework could serve as a model for balancing innovation with safety and accountability in defence operations.