President Donald Trump signed an executive order on December 11, 2025, aimed at overriding state artificial intelligence laws that the administration views as impediments to AI innovation. The order comes after 38 states enacted AI regulations during 2025, addressing issues ranging from AI-enabled stalking to the manipulation of human behavior.
The directive establishes a U.S. policy favoring a "minimally burdensome" national framework for AI. It calls on the U.S. attorney general to form an AI litigation task force to challenge state laws deemed inconsistent with that policy. It also tasks the secretary of commerce with identifying "onerous" state laws that conflict with federal guidelines, with provisions to withhold Broadband Equity, Access, and Deployment (BEAD) Program funding from states that keep such laws in place. Notably, the executive order exempts laws related to child safety.
Executive orders serve as directives to federal agencies on how to implement existing law, and this order instructs federal departments to act within the limits of their legal authorities. It has drawn sharply divided reactions. Major technology companies have advocated for federal intervention, arguing that complying with a patchwork of state regulations stifles innovation.
Supporters of state regulation counter that these laws are vital to public safety and economic well-being. Notable examples come from California, Colorado, Texas, and Utah, which have adopted rules addressing algorithmic discrimination and requiring transparency in AI applications. A key focus is Colorado's Consumer Protections for Artificial Intelligence law, the first comprehensive state statute in the U.S. regulating AI in critical areas such as employment and healthcare, though its enforcement has been delayed while legislators assess its implications.
Colorado's law requires organizations that deploy "high-risk" AI systems to conduct impact assessments, notify consumers when predictive AI is used in consequential decisions, and disclose the types of systems in use. Similarly, Illinois will begin enforcing a law on January 1, 2026, that amends the state's Human Rights Act to classify discriminatory uses of AI as civil rights violations.
California's Transparency in Frontier Artificial Intelligence Act imposes stringent requirements on the nation's largest AI models, those costing at least $100 million to develop and requiring substantial computational power. The law targets the risks posed by these frontier systems, which could cause catastrophic harm if mismanaged. Developers must disclose how they adhere to national and international standards, conduct risk assessments, and report critical incidents to the state's Office of Emergency Services.
Texas regulates AI through the Texas Responsible AI Governance Act, which restricts the use of AI for behavioral manipulation and offers safe-harbor provisions to encourage adoption of responsible AI governance frameworks. Notably, it creates a regulatory "sandbox" in which developers can safely test AI systems. Utah's Artificial Intelligence Policy Act, meanwhile, requires companies to disclose the use of generative AI tools to consumers, holding them accountable for consumer harms arising from AI interactions.
Some state leaders oppose federal efforts to negate state regulation. Florida Governor Ron DeSantis, for example, has instead called for a Florida AI bill of rights addressing the technology's inherent risks. Separately, attorneys general from 38 states and territories have urged AI companies, including Google and OpenAI, to correct misleading AI outputs that encourage users to place undue trust in the systems.
While the executive order marks a significant shift in the federal approach to AI regulation, its efficacy remains uncertain. Observers note that it is likely to face legal challenges, since only Congress holds the power to preempt state law. The order's final provision instructs federal officials to propose legislation reconciling these differences, suggesting a contentious path ahead as the debate over AI governance intensifies.