In a significant bipartisan step toward bolstering national security around artificial intelligence, Senators Todd Young (R-Ind.) and Mark Kelly (D-Ariz.) have introduced the Advanced AI Security Readiness Act. This new legislation aims to direct the National Security Agency (NSA) to proactively tackle threats posed by foreign adversaries against U.S. AI technologies.
The act mandates the NSA to develop a comprehensive AI security playbook, which will serve as a roadmap to identify and mitigate vulnerabilities in both AI technology and supply chains. The NSA’s AI Security Center is tasked with this responsibility, ensuring that strategies are in place to safeguard advanced American AI systems against potential exploitation.
“America’s leadership in advanced technology depends on our ability to protect it. As our foreign adversaries race to steal and exploit cutting-edge AI systems, we must stay ahead of these threats,” stated Young in a press release. “The Advanced AI Security Readiness Act will ensure the intelligence professionals at NSA have the tools and direction needed to safeguard U.S. innovation and preserve America’s technology advantages.”
The legislation emphasizes the need for a focused approach to cybersecurity challenges specific to AI systems. The NSA’s playbook is expected to highlight critical elements of the AI supply chain that are especially vulnerable to threat actors. It aims to outline strategies for handling cyber threats, including protective measures for model weights and protocols to counter insider threats and cyberespionage.
“AI increasingly powers our defense, intelligence, critical infrastructure, scientific innovation, and much of our economy. If it’s vulnerable, we’re vulnerable,” Kelly remarked. “This bipartisan legislation gets the NSA prepared to spot attacks early and defend our country’s AI innovation from anyone trying to exploit it. As AI evolves, we need to stay ahead of the challenges it brings to keep Americans safe.”
The collaboration is not limited to the NSA alone; the bill calls for partnerships with notable AI developers and researchers. The agency will engage in interviews with subject matter experts, host roundtable discussions, and visit relevant facilities. Moreover, it will collaborate with institutions like the Department of Energy-run national laboratories and other federally funded R&D centers that have specialized knowledge in AI security. Key partners also include the Commerce Department’s Bureau of Industry and Security, the National Institute of Standards and Technology’s Center for AI Standards and Innovation, the Department of Homeland Security, and the Department of Defense.
If enacted, this act will mark another important move in a series of recent initiatives by the NSA aimed at enhancing AI security. Just this past April, the agency released a cybersecurity information sheet detailing best practices for deploying secure AI systems. Following that, a month later, the NSA’s AI Security Center published joint guidance with entities such as the FBI and the Cybersecurity and Infrastructure Security Agency on securing data essential for training and operating AI systems.
However, the NSA’s increasing focus on AI has attracted scrutiny from privacy advocates, including the American Civil Liberties Union (ACLU). In April 2024, the organization filed a lawsuit against the agency under the Freedom of Information Act to compel the release of studies, roadmaps, and reports concerning its AI usage and potential impacts on civil liberties.
As the U.S. government ramps up its efforts to secure AI technologies, the implications of this legislative move extend beyond national security. It highlights a growing recognition of AI’s pivotal role in various sectors and the urgent need to safeguard these systems against evolving threats.