On December 11, 2025, President Trump signed Executive Order 14365, titled “Ensuring a National Policy Framework for Artificial Intelligence,” which sets out a federal approach to AI regulation: challenging state AI laws, leveraging federal funding as a pressure point, pursuing a preemption strategy, and directing the development of a national framework with targeted exemptions. As mandated by the Executive Order, the White House released a draft framework on March 20, 2026, a significant step toward a cohesive federal approach.
In tandem with federal actions, Congress and state legislatures have been increasingly active in shaping AI policy. Notably, over **40 states** introduced approximately **250 bills** related to government use of AI in 2025. This momentum underscores a growing recognition of the need to regulate AI at multiple levels of government, though it raises concerns about the potential for a fragmented regulatory landscape.
The White House framework emphasizes the necessity of a federal AI policy to safeguard American rights, foster innovation, and avert a disjointed patchwork of state regulations that could undermine national competitiveness. Key recommendations include preempting state AI laws that impose excessive burdens, ensuring a consistent national standard. At the same time, the framework specifies that states would retain the authority to enforce laws of general applicability against AI developers and users, particularly those aimed at protecting children, preventing fraud, and securing consumer interests.
White House adviser **David Sacks** indicated that Congress could enact comprehensive AI legislation within months, fulfilling the President’s commitment to establishing a national regulatory framework. In September 2025, Senator **Ted Cruz** (R-TX) introduced the “Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation” (SANDBOX) Act, which proposes a regulatory sandbox allowing AI developers to seek modifications to regulations that might hinder innovation. This initiative aims to enhance transparency and facilitate safe AI usage.
Just prior to the release of the White House framework, Senator **Marsha Blackburn** (R-TN) unveiled the “TRUMP AMERICA AI Act,” which seeks to preempt state laws affecting AI development and establish a cohesive national standard. This bill would primarily focus on managing catastrophic risks associated with frontier AI technologies while empowering both federal and state attorneys general to pursue liability claims against AI system developers for various harms.
The same day the national framework was announced, a group of influential Republican legislators, including House Speaker **Mike Johnson** (R-LA) and House Majority Leader **Steve Scalise** (R-LA), committed to bipartisan collaboration to create a national framework that supports the growth of AI while ensuring protections for American families. This collective commitment underscores the urgency and importance of establishing clear regulations amid rapidly advancing technologies.
Meanwhile, states like **California**, **New York**, and **Texas** have already enacted comprehensive laws focused on AI transparency and consumer safety. In 2024, **Colorado** and **Utah** took significant steps by enacting legislation governing AI usage; Colorado later delayed the effective date of its AI Act, which targets algorithmic discrimination, until June 2026. Utah amended its laws to clarify their applicability to regulated entities, thereby enhancing consumer protections in AI interactions, particularly in mental health contexts.
The National Governors Association (NGA) has responded to the evolving AI landscape by launching a “Working Group on AI & Future of Work,” which consists of advisors from bipartisan NGA member states. This group meets regularly to share best practices and address common challenges associated with AI. Their report, scheduled for release in November 2026, will encompass key elements such as descriptions of AI technologies, a survey of current policies, and recommendations for governors to lead AI innovation within their states.
As technology companies navigate this complex regulatory environment, their core priorities include advocating for federal preemption over state laws, establishing a risk-based regulatory approach, and ensuring regulatory certainty to foster investment. However, there is no unified industry stance; while major tech firms often support some regulation, startups typically prefer minimal oversight to maintain their competitive edge. The debate also extends to the realm of open-source versus closed-source AI, with differing opinions regarding the safety implications of making powerful models publicly accessible.
Looking ahead, the challenge remains whether Congress will act to override state laws amid a politically charged environment marked by an upcoming election year. With lobbying efforts on AI surging—over **640 companies** engaged at the federal level in 2024, marking a **141% increase** from the previous year—those invested in AI must participate actively in shaping the policies that will govern its future.