AI Regulation

Trump’s Executive Order Targets State AI Regulations, Aims for National Framework

Trump’s executive order targets state AI regulations, directing the attorney general to challenge laws the administration deems a hindrance to innovation, particularly in AI safety and transparency.

President Donald Trump signed an executive order on December 11, 2025, aimed at overriding state-level artificial intelligence laws that the administration views as impediments to AI innovation. The order comes after 38 states enacted AI regulations throughout 2025, addressing issues ranging from AI-enabled stalking to manipulation of human behavior.

The new directive establishes a U.S. policy of pursuing a “minimally burdensome” national framework for AI. It calls on the U.S. attorney general to form an AI litigation task force to contest state laws deemed inconsistent with this policy. Additionally, the secretary of commerce is tasked with identifying “onerous” state laws that conflict with federal guidelines, with provisions to withhold funding under the Broadband Equity, Access, and Deployment (BEAD) Program from states that maintain such laws. Notably, the executive order exempts laws related to child safety.

Executive orders serve as directives for federal agencies on the implementation of existing laws. This particular order instructs federal departments to act within the limits of their legal authorities, prompting significant reactions from various stakeholders. Major technology companies have advocated for federal intervention, arguing that the complexity of adhering to a patchwork of state regulations stifles innovation.

Supporters of state regulations argue that these laws are vital for public safety and economic well-being. Notable examples come from states like California, Colorado, Texas, and Utah, where regulations have been implemented to address algorithmic discrimination and ensure transparency in AI applications. A key focus is Colorado’s Consumer Protections for Artificial Intelligence, the first comprehensive state law in the U.S. regulating AI in critical areas like employment and healthcare. However, enforcement has faced delays as legislators assess its implications.

Colorado’s law mandates organizations that utilize “high-risk systems” to conduct impact assessments, inform consumers about the deployment of predictive AI in consequential decisions, and disclose the types of systems used. Similarly, Illinois is set to enforce a law starting January 1, 2026, which amends the Human Rights Act to classify discriminatory uses of AI as civil rights violations.

California’s Transparency in Frontier Artificial Intelligence Act introduces stringent requirements for the nation’s largest AI models, those that cost a minimum of $100 million to develop and require substantial computational power. The law aims to regulate the risks posed by these advanced AI systems, which can lead to catastrophic outcomes if mismanaged. Developers must disclose how they adhere to national and international standards, conduct risk assessments, and report critical incidents to the state’s Office of Emergency Services.

Texas has its own set of regulations through the Texas Responsible AI Governance Act, which restricts the use of AI for behavioral manipulation while providing safe harbor provisions to encourage compliance with responsible AI governance frameworks. Notably, it creates a “sandbox” environment for developers to safely test AI behavior. Meanwhile, Utah’s Artificial Intelligence Policy Act mandates disclosure of generative AI tools to consumers, ensuring that companies remain accountable for any consumer harms resulting from AI interactions.

In light of these developments, some state leaders, including Florida Governor Ron DeSantis, oppose federal efforts to preempt state regulations, advocating instead for a Florida AI bill of rights that addresses the technology’s inherent risks. Concurrently, attorneys general from 38 states and territories have urged AI companies, including major players like Google and OpenAI, to rectify misleading outputs generated by AI systems that may foster overly trusting behaviors in users.

While the executive order presents a significant shift in the federal approach to AI regulation, its efficacy remains uncertain. Observers argue it may face legal challenges, as only Congress holds the power to override state laws. The final provision of the order instructs federal officials to propose legislation to reconcile these differences, suggesting a contentious path ahead as the debate over AI governance intensifies.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.