The Trump administration is moving forward with plans to establish a national framework for artificial intelligence (AI), aimed at securing U.S. leadership in a rapidly evolving technology landscape. Key initiatives include the AI Action Plan and a series of executive orders designed to remove regulatory barriers and accelerate the build-out of AI infrastructure and data centers. The push comes as federal agencies such as the Federal Trade Commission step up enforcement against misleading AI claims and practices.
Drew Bagley, vice president and counsel for privacy and cyber policy at CrowdStrike, emphasized that a national framework is essential to avoid the fragmented regulatory environment that characterized the early days of privacy legislation. “The federal government didn’t take the lead there, and that resulted in a heterogeneous regulatory landscape,” he said. With AI, the administration aims to prevent similar pitfalls through a cohesive framework.
While the proposed framework is still in development, experts suggest it will focus on critical aspects of data management and security. Bagley noted that the framework is likely to encourage innovation while safeguarding data integrity. “It will almost surely articulate guidelines around what is going on with the data and how data that’s leaving your enterprise is being protected against threats,” he stated. This focus on cybersecurity will be crucial, especially given the rapid pace at which cyber threats evolve.
However, the responsibility for actualizing the framework largely falls on Congress. “The administration understands that a lot of that heavy lifting actually has to be done by Congress,” said another expert, referring to the legislative action needed to solidify any proposed guidelines. The administration can set guidance and policy priorities, but the legal details will require congressional approval.
In anticipation of future compliance requirements, federal agencies can take proactive steps to prepare their operations for the impending regulatory landscape. Hagemann highlighted the existing NIST AI Risk Management Framework as a valuable resource for agencies to assess their risk management strategies. “It offers a pretty good metric and rubric for assessing whether or not you as an organization are doing your best due diligence,” he explained. Agencies not yet aligned with this framework are encouraged to expedite their efforts to meet these standards.
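The kind of due-diligence check Hagemann describes can be sketched in code. The four core functions below (Govern, Map, Measure, Manage) are the actual core of NIST AI RMF 1.0, but the 0–5 scoring scale, the threshold, and the sample scores are hypothetical illustrations, not part of the framework:

```python
from statistics import mean

# The four core functions of the NIST AI Risk Management Framework (AI RMF 1.0).
RMF_CORE_FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

def assess(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the core functions scoring below threshold (hypothetical 0-5 scale)."""
    missing = [f for f in RMF_CORE_FUNCTIONS if f not in scores]
    if missing:
        raise ValueError(f"no score recorded for: {missing}")
    return [f for f in RMF_CORE_FUNCTIONS if scores[f] < threshold]

# Hypothetical agency self-assessment scores.
scores = {"Govern": 4, "Map": 3, "Measure": 2, "Manage": 3}
gaps = assess(scores)
print(f"Average maturity: {mean(scores.values()):.1f}")   # Average maturity: 3.0
print(f"Functions needing attention: {gaps}")             # ['Measure']
```

An agency could extend such a rubric down to the framework's subcategories, but even at the function level it flags where alignment work is lagging.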
Furthermore, agencies are advised to enhance their data management practices, viewing them through the lens of a governance problem. Bagley remarked, “Any sort of AI framework will naturally have something to say about data governance, and those principles are pretty much tried, true and tested.” While the challenges associated with AI may differ in speed and scale, the foundational principles of data governance remain consistent.
Privacy will be a significant focus of the emerging framework, with many agencies already prioritizing this aspect. Hagemann noted that maintaining a strong emphasis on privacy will help agencies align themselves with the forthcoming regulatory structure. “They will be at least in general alignment with everything that’s going to be outlined in that framework,” he said.
The administration’s messaging around AI has been clear: agencies must leverage new technologies safely and effectively. Hagemann pointed out that potential policy adjustments related to procurement could facilitate this goal. “There’s a lot of commercial off-the-shelf models and systems that will probably serve a lot of purposes within the federal government perfectly fine,” he stated. Adjusting procurement rules could enable quicker adoption of existing technologies, reducing the need for agencies to develop solutions independently.
As federal agencies increasingly integrate AI into their operations, there is potential for policies that encourage the use of AI for repeatable and automatable tasks. Bagley indicated that the expectation to utilize AI could soon become a standard practice in government operations. This aligns with the administration’s broader aim of enhancing efficiency through AI adoption.
Emerging policy discussions will likely also address the intersection of AI and cybersecurity. Bagley warned that adversaries are evolving their tactics rapidly, moving at “machine speed.” He cited findings from CrowdStrike’s 2026 Global Threat Report, which found that average “breakout time” — the window between an adversary’s initial compromise of one system and its move to another — has fallen to 27 minutes. That urgency underscores the case for AI-powered cyber defenses at the policy level.
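The breakout-time metric Bagley cites is simple interval arithmetic: the gap between initial access on one host and the first lateral movement to another. The timestamps below are hypothetical, chosen only to illustrate a 27-minute window:

```python
from datetime import datetime, timedelta

# Hypothetical intrusion timeline: initial compromise of the first host,
# then the first observed lateral movement to a second system.
initial_access = datetime(2025, 6, 1, 14, 0, 0)
first_lateral_move = datetime(2025, 6, 1, 14, 27, 0)

# Breakout time: how long defenders have to detect and contain the
# intrusion before it spreads beyond the initial host.
breakout = first_lateral_move - initial_access
print(f"Breakout time: {breakout.total_seconds() / 60:.0f} minutes")  # 27 minutes
print(breakout <= timedelta(minutes=30))  # True
```

At that pace, a response workflow that takes hours to triage an alert loses the race before it starts, which is the argument for automated, AI-assisted containment.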
As agencies navigate these developments, proactive exploration of AI solutions will be essential for safeguarding data and systems. The anticipated national framework for AI represents an opportunity for federal leaders to adopt best practices and enhance operational resilience in an increasingly complex technological landscape.