The debate over regulating artificial intelligence (AI) is intensifying as the Trump administration pushes for a national framework designed to preempt state laws. The move follows two previous attempts at federal preemption that faltered: a failed Senate vote last summer and a provision withdrawn from the National Defense Authorization Act. In a bid to consolidate authority and accelerate a unified approach, President Trump issued an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” and the White House recently released a federal AI policy framework outlining the administration’s legislative priorities.
The new framework proposes a “minimally burdensome” national standard and emphasizes protections for children, communities, and creators. It also explicitly calls on Congress to “preempt state AI laws” to establish a unified standard, arguing that a patchwork of state-level rules hinders technological advancement and could put the U.S. behind competitors like China. White House AI Czar David Sacks has characterized the state landscape as chaotic, citing the introduction of 1,200 AI-related bills this year as evidence of “50 states going in 50 different directions.”
Nevertheless, critics argue that Sacks’ assessment oversimplifies the situation. A survey by the Institute for Family Studies (IFS) indicates a growing consensus among states on what Americans want addressed in AI policy: inquiry, humanity, transparency, safety, security, and accountability. And the sheer number of bills introduced does not equate to significant regulatory change: last year, only 136 of 1,136 proposed bills became law, and just 26 of those regulate private AI use or development.
The IFS report found that only 276 AI-related laws were enacted from 2023 to 2025, most of which address AI in broad terms, such as funding AI research or clarifying that existing laws cover AI-generated content. Only 33 of those laws specifically regulate the development or use of AI tools by private entities, countering claims of a burdensome regulatory environment.
Inquiry, in particular, has gained traction: 39 states have enacted laws funding AI-related research, including substantial appropriations to major universities and the creation of committees focused on AI policy. These initiatives reflect a desire for greater understanding of AI among policymakers and the public, especially as skepticism about the technology’s implications persists.
Many states are also prioritizing the dignity and safety of their citizens by enacting laws to mitigate AI-related harms. Notable examples include the Texas Responsible Artificial Intelligence Governance Act, which prohibits AI products designed to promote harmful activities, and Tennessee’s ELVIS Act, which protects individuals’ voice and likeness from unauthorized AI replication. These legislative efforts illustrate a commitment to shielding citizens from potential abuses of AI technologies.
However, some laws have drawn criticism for being overly broad, such as Colorado’s Artificial Intelligence Act, which imposes extensive reporting requirements for AI’s disparate impacts in critical fields like employment. This raises concerns about the potential for regulatory overreach, particularly as the Trump administration seeks a national standard that minimizes such burdens.
Transparency has also become a focal point, with at least 10 states passing laws that require AI developers to disclose safety protocols. California’s AI Transparency Act and New York’s RAISE Act exemplify this trend, aiming to bolster public trust by mandating that consumers be informed about the risks associated with AI technologies.
The push for safety and security has produced laws addressing concerns ranging from AI-generated deepfakes to the safety of AI chatbots, particularly where minors are involved. States like Kansas and Oregon have prohibited the use of AI products developed by foreign entities, reflecting growing national security concerns.
As states continue to innovate and establish their own regulations, it becomes increasingly clear that a federal standard is necessary both for consistency and for addressing Americans’ shared concerns about AI. The specifics of preemption are crucial, however: an overly broad preemption could stifle states’ ability to adapt to new challenges. The diversity of state responses can serve as valuable test cases, informing federal lawmakers on which regulatory approaches actually work.
Federal preemption may be on the horizon, but it should set a regulatory floor that states can build upon. The ongoing evolution of AI presents challenges that demand the participation and insight of all 50 states, preserving their role as laboratories of democracy in the age of artificial intelligence.