
Trump Administration Proposes Federal AI Regulation to Preempt State Laws Amid Legislative Debate

Trump administration seeks federal AI regulation to preempt state laws, proposing a national standard as states introduce 1,200 AI bills this year.

The debate over the regulation of artificial intelligence (AI) is intensifying as the Trump administration pushes for a national framework designed to preempt state laws. This move follows two previous failed attempts at federal preemption: a Senate vote last summer and a provision withdrawn from the National Defense Authorization Act. In a bid to consolidate authority and accelerate federal action, President Trump issued an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” and the White House recently released a federal AI policy framework outlining the administration’s legislative priorities.

The new framework proposes a “minimally burdensome” national standard and emphasizes protections for children, communities, and creators. It also explicitly calls for Congress to “preempt state AI laws” to establish a unified standard, arguing that a disjointed state-level approach hinders technological advancement and could put the U.S. behind competitors like China. White House AI Czar David Sacks has characterized the state landscape as chaotic, citing the introduction of “1,200” AI-related bills this year as evidence of “50 states going in 50 different directions.”

Nevertheless, critics argue that Sacks’ assessment oversimplifies the situation. An Institute for Family Studies (IFS) survey indicates a growing consensus among states on what Americans want addressed regarding AI, with priorities falling into themes of inquiry, humanity, transparency, safety, security, and accountability. The sheer number of bills introduced does not equate to sweeping regulatory change: last year, only 136 of 1,136 proposed bills became law, and just 26 of those regulate private AI use or development.

From 2023 to 2025, the IFS report found that only 276 AI-related laws were enacted, most of which address AI in broad terms, such as funding for AI research or clarifying existing laws to include AI-generated content. Only 33 of these laws specifically regulate the development or use of AI tools by private entities, countering claims of a burdensome regulatory environment.

The trend of inquiry has gained traction across the states, with 39 enacting laws that fund AI-related research. Initiatives include substantial appropriations to major universities and the formation of committees focused on AI policy. These developments reflect a desire for greater understanding of AI among policymakers and the public, especially as skepticism persists regarding the technology’s implications.

Moreover, many states are prioritizing the dignity and safety of their citizens by enacting laws to mitigate AI-related harms. Notable examples include the Texas Responsible Artificial Intelligence Governance Act, which prohibits AI products from promoting harmful activities, and the Tennessee ELVIS Act, which protects individuals’ voice and likeness from unauthorized AI replication. These legislative efforts illustrate a commitment to protecting citizens from the potential abuses of AI technologies.

However, some laws have drawn criticism for being overly broad, such as Colorado’s Artificial Intelligence Act, which imposes extensive reporting requirements for AI’s disparate impacts in critical fields like employment. This raises concerns about the potential for regulatory overreach, particularly as the Trump administration seeks a national standard that minimizes such burdens.

Transparency has also become a focal point, with at least 10 states passing laws that require AI developers to disclose safety protocols. California’s AI Transparency Act and New York’s RAISE Act exemplify this trend, aiming to bolster public trust by mandating that consumers be informed about the risks associated with AI technologies.

The quest for safety and security has led to the enactment of laws addressing various concerns, including the regulation of AI-generated deepfakes and the safety of AI chatbots, particularly in relation to minors. States like Kansas and Oregon have prohibited the use of AI products developed by foreign entities, reflecting a growing national security concern.

As states continue to innovate and establish their own regulations, it becomes increasingly clear that a federal standard is not only necessary for consistency but also essential for addressing the shared concerns of Americans around AI. Nonetheless, the specifics of such preemption are crucial; overly broad regulations could stifle states’ abilities to adapt to new challenges. The diversity in state responses could serve as valuable test cases, informing federal lawmakers on effective regulatory approaches.

While federal preemption may be on the horizon, it must create a regulatory floor that allows states to build upon it. The ongoing evolution of AI presents unique challenges that necessitate the participation and insight of all 50 states, transforming them into effective laboratories of democracy in the age of artificial intelligence.

Written by: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.