NDAA Approves $8B for AI, Enforces New Cybersecurity Measures Amid Rising Risks

Senate passes NDAA authorizing $8B for defense AI integration, mandating new cybersecurity measures and risk governance to counter threats from China and Russia

On December 17, the Senate passed the National Defense Authorization Act (NDAA), which now awaits the President’s signature. The roughly 3,000-page legislation sets United States defense policy for 2026 and authorizes $8 billion for artificial intelligence (AI) integration across defense programs. A focal point of the NDAA is embedding AI within the military framework, emphasizing the need for rapid deployment while also addressing the associated risks.

Unlike the approach taken in President Trump’s AI Action Plan, the NDAA acknowledges the potential hazards of swift AI integration. It mandates new processes within the Pentagon and the intelligence community to evaluate risks and implement governance frameworks designed to identify, measure, and mitigate threats posed by advanced AI systems. Furthermore, the legislation imposes restrictions intended to curb the growth of China’s AI industry, including new provisions under the Outbound Investment regime administered by the U.S. Treasury Department, which oversees U.S. investments in technologies deemed critical.

The NDAA outlines specific provisions related to AI and cybersecurity that could significantly impact the defense sector. Notably, it directs the creation of new committees aimed at overseeing the development and assessment of AI systems. Section 1533 tasks the Secretary of Defense with establishing a cross-functional team for AI model assessment by June 2026. This team will develop a department-wide assessment framework by June 2027, encompassing standards for performance, testing procedures, security requirements, and ethical principles surrounding AI usage.

Moreover, Section 1534 mandates the formation of a task force to create AI sandbox environments, which are isolated computing zones designed for experimentation and training. This initiative aims to enhance the Pentagon’s capability to develop and evaluate AI technologies effectively. Section 1535 introduces the Artificial Intelligence Futures Steering Committee, which will guide the long-term AI strategy within the Pentagon by identifying emerging technologies and recommending investments in research and ethical frameworks.
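The bill does not specify how these sandboxes should be built. As a rough illustration only (none of this comes from the legislation itself), an "isolated computing zone" is commonly implemented as a container with no network access and capped resources; the short Python sketch below, which assumes a local Docker installation and a hypothetical `run_in_sandbox` helper, shows one simple version of that idea.

```python
# Hypothetical sketch of an "isolated computing zone": run an AI experiment
# inside a container with no network access and capped resources.
# Assumes Docker is installed and on PATH; this is illustrative, not from the NDAA.
import subprocess

def run_in_sandbox(image: str, command: list[str]) -> subprocess.CompletedProcess:
    """Run a command in a network-isolated, resource-limited container."""
    docker_cmd = [
        "docker", "run", "--rm",
        "--network", "none",   # no inbound or outbound network access
        "--memory", "4g",      # cap memory available to the experiment
        "--cpus", "2",         # cap CPU usage
        "--read-only",         # container filesystem is immutable
        image,
    ] + command
    return subprocess.run(docker_cmd, capture_output=True, text=True, check=False)

if __name__ == "__main__":
    # Example: run a trivial script inside the sandboxed container.
    result = run_in_sandbox("python:3.12-slim", ["python", "-c", "print('sandboxed run')"])
    print(result.stdout)
```

A real defense sandbox would layer on accreditation, classified network segregation, and audit logging, but the underlying isolation principle is the same.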

Section 6602 further instructs the Chief Information Officer and Chief Artificial Intelligence Officer of the intelligence community to identify commonly used AI tools and share them across the community’s elements without significant modification, although the section offers little detail on how those tools should be evaluated. Additionally, Section 6603 addresses the hosting of publicly available AI models, such as ChatGPT, in classified environments, calling for policies that ensure rigorous testing standards for performance and safety.

The NDAA also revises the contracting process for AI technologies. Section 6602(d) directs the Chief Information Officer of the intelligence community to develop model contractual terms aimed at minimizing reliance on proprietary information. Although these terms are not mandatory, they are expected to influence government contracting practices in the AI sector. The legislation also includes a provision that prevents intelligence community officers from directing vendors to alter AI models to favor specific viewpoints, echoing elements of Trump’s Executive Order on AI.

In light of concerns regarding Chinese-owned generative AI systems, Section 1532 prohibits the Pentagon from using or acquiring AI systems from nations considered a threat, such as China and Russia. It also prevents contractors from utilizing these technologies, although waivers may be granted for specific national security-related activities. Further, Section 8521 amends the Defense Production Act, empowering the Treasury to tighten regulations on U.S. investments in sensitive technologies within these countries.

The NDAA also expands cybersecurity measures, requiring enhanced safeguards for AI-related systems. Section 1512 requires the Pentagon to develop a comprehensive cybersecurity policy for AI and machine learning systems within 180 days of enactment, addressing risks such as adversarial attacks and unauthorized access. Section 1511 strengthens requirements for secure mobile devices used by senior officials, mandating encryption and continuous monitoring capabilities.

Additional provisions aim to improve coordination within cyber capabilities. Section 1501 seeks to establish processes for budget planning specifically for Cyber Mission Force operations, ensuring that these capabilities are adequately resourced. Section 1503 directs the creation of a framework for assessing technical debt within IT systems, while Section 1504 establishes a working group to enhance data interoperability across the Department of Defense.

Looking ahead, several provisions discussed during negotiations did not make it into the final bill, including a proposed AI moratorium and controls on semiconductor chip exports. Efforts to include a federal standard for AI also failed to attract bipartisan support. As such, the NDAA reflects a balancing act between promoting AI innovation and addressing the associated risks, particularly those posed by foreign adversaries. Debates over these issues are likely to continue as the U.S. navigates the complexities of modern defense technology.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.
