On December 17, the Senate passed the National Defense Authorization Act (NDAA), which now awaits the President’s signature. The roughly 3,000-page bill sets U.S. defense policy for 2026, authorizing approximately $900 billion in defense spending across various programs. A focal point of the NDAA is the integration of artificial intelligence (AI) within the military framework, emphasizing the need for rapid deployment while also addressing the associated risks.
Unlike the approach taken in President Trump’s AI Action Plan, the NDAA acknowledges the potential hazards of swift AI integration. It mandates new processes to evaluate risks and to implement governance frameworks within the Pentagon and the intelligence community, with the aim of identifying, measuring, and mitigating threats posed by advanced AI systems. The legislation also imposes restrictions intended to curtail the expansion of China’s AI industry, including new provisions under the Treasury Department’s Outbound Investment regime, which governs U.S. investments in technologies deemed critical.
The NDAA outlines specific provisions related to AI and cybersecurity that could significantly impact the defense sector. Notably, it directs the creation of new committees aimed at overseeing the development and assessment of AI systems. Section 1533 tasks the Secretary of Defense with establishing a cross-functional team for AI model assessment by June 2026. This team will develop a department-wide assessment framework by June 2027, encompassing standards for performance, testing procedures, security requirements, and ethical principles surrounding AI usage.
Moreover, Section 1534 mandates the formation of a task force to create AI sandbox environments, which are isolated computing zones designed for experimentation and training. This initiative aims to enhance the Pentagon’s capability to develop and evaluate AI technologies effectively. Section 1535 introduces the Artificial Intelligence Futures Steering Committee, which will guide the long-term AI strategy within the Pentagon by identifying emerging technologies and recommending investments in research and ethical frameworks.
Section 6602 further instructs the Chief Information Officer and Chief Artificial Intelligence Officer of the intelligence community to identify and share commonly used AI tools across its various elements without significant modification. However, the section offers little detailed guidance on how these evaluations should be conducted. Additionally, Section 6603 addresses the hosting of publicly available AI models, such as ChatGPT, in classified environments, calling for policies that set rigorous performance and safety testing standards.
The NDAA also revises the contracting process for AI technologies. Section 6602(d) directs the Chief Information Officer of the intelligence community to develop model contractual terms aimed at minimizing reliance on proprietary information. Although these terms are not mandatory, they are expected to influence government contracting practices in the AI sector. The legislation also includes a provision that prevents intelligence community officers from directing vendors to alter AI models to favor specific viewpoints, echoing elements of Trump’s Executive Order on AI.
In light of concerns regarding Chinese-owned generative AI systems, Section 1532 prohibits the Pentagon from using or acquiring AI systems from nations considered a threat, such as China and Russia. It also prevents contractors from utilizing these technologies, although waivers may be granted for specific national security-related activities. Further, Section 8521 amends the Defense Production Act, empowering the Treasury to tighten regulations on U.S. investments in sensitive technologies within these countries.
The NDAA also expands cybersecurity measures, requiring enhanced safeguards for AI-related systems. Section 1512 directs the Pentagon to develop a comprehensive cybersecurity policy for AI and machine learning systems within 180 days of enactment, addressing risks such as adversarial attacks and unauthorized access. Section 1511 strengthens cybersecurity requirements for secure mobile devices used by senior officials, requiring encryption and continuous monitoring capabilities.
Additional provisions aim to improve coordination within cyber capabilities. Section 1501 seeks to establish processes for budget planning specifically for Cyber Mission Force operations, ensuring that these capabilities are adequately resourced. Section 1503 directs the creation of a framework for assessing technical debt within IT systems, while Section 1504 establishes a working group to enhance data interoperability across the Department of Defense.
Looking ahead, several notable provisions were left out of the final NDAA, including a proposed AI moratorium and controls on semiconductor chip exports. Although efforts were made to include a federal standard for AI, bipartisan support was lacking. As such, the NDAA reflects a balancing act between promoting innovation in AI and addressing the associated risks, particularly those posed by foreign adversaries. The ongoing legislative landscape suggests that debates over these issues will continue as the U.S. navigates the complexities of modern defense technology.