The Trump administration has shifted its strategy on state-level artificial intelligence (AI) laws, pausing an executive order that would have directed federal legal action against states enforcing their own AI regulations. The pivot is a marked departure from the White House’s earlier push for a single national standard, which officials had framed as essential and had tied to conditions on federal funding.
Just days before the announcement, officials were weighing the creation of an AI Litigation Task Force and warning that states with their own regulatory frameworks could see reduced allocations from federal broadband programs. That aggressive posture followed an earlier attempt to impose a decade-long moratorium on state AI laws, which the Senate rejected by a 99-1 vote. The pause suggests that the political and legal complexities of federal preemption are proving harder to navigate than anticipated.
Shifting Political Dynamics in Washington
Within the administration, there has been notable resistance to a preemptive federal approach. Advocates of states’ rights warned that undercutting traditional federalism would contradict the party’s own principles. Industry opinion is divided: some major tech platforms favor a single national framework, while others argue that state laws fill gaps left by an unresponsive Congress.
The optics of targeting state AI laws have also worried the White House. Critics noted that attacking companies such as Anthropic over their support for California’s SB 53 risks turning the issue into a political feud rather than a policy debate. The federal agencies that would have enforced the order were also bracing for drawn-out litigation with unpredictable outcomes.
State Governments Taking Initiative
Meanwhile, states are not waiting for federal guidance. Colorado has enacted a pioneering AI law requiring risk assessments for “high-risk” systems, with several provisions set to take effect in 2026. In New York City, Local Law 144 mandates bias audits for automated employment decision tools, while Illinois’s Biometric Information Privacy Act has produced substantial settlements against companies across multiple industries.
California continues to advance a series of AI and algorithmic accountability bills, including provisions that echo the National Institute of Standards and Technology’s AI Risk Management Framework. Despite differences in their specifics, these laws share common themes, such as:
- Documenting model risks
- Impact assessments for sensitive applications
- Mechanisms for recourse when automated systems fail
Data from the Stanford AI Index shows state legislative activity rising across the U.S. even as no comprehensive federal AI law exists. In practice, governors and state attorneys general have become the primary regulators of AI.
Legal Challenges to Federal Preemption
For federal law to preempt state law, there typically must be a clear federal statute and either an express preemption provision or a direct conflict with the state rule. No overarching federal AI statute currently exists to support such an action. Litigation invoking the dormant Commerce Clause against state AI laws would also face significant hurdles, since courts generally allow states to address local harms so long as the laws are not overtly protectionist.
Tying compliance to federal broadband funding carries its own risks. The Supreme Court’s anti-coercion doctrine, articulated in NFIB v. Sebelius, limits the federal government’s ability to attach new conditions to funding states already rely on. Threatening to withhold money from programs such as NTIA’s Broadband Equity, Access, and Deployment (BEAD) program would likely draw swift challenges from states of both parties.
As AI governance grows more decentralized, compliance officers must prepare for a patchwork of state regulations. The most practical way to manage the complexity is to build to the most stringent requirement across jurisdictions. Key areas to prioritize include:
- Risk classification and management
- Systematic testing for safety and bias
- Impact assessments for high-stakes use cases
- User notification and avenues for recourse when automated systems impact rights or livelihoods
Mapping practices to the NIST AI Risk Management Framework can provide a foundation that satisfies multiple state regimes at once. Regardless of what Washington does next, organizations operating in areas such as hiring, healthcare, and education should expect heightened scrutiny, audit requirements, and public reporting obligations to persist.
In summary, the administration’s decision to retreat from its aggressive stance on state AI laws underscores the intricate political and legal landscape surrounding AI governance in the United States. Without a definitive national law from Congress, states are poised to continue shaping AI regulations, and companies will need to adapt proactively to this evolving framework.