The Trump administration has unveiled new policy guidance aimed at shaping federal regulation of artificial intelligence (AI), signaling a renewed effort to override state-level laws that it views as hindrances to innovation. This guidance, released on Friday, follows a previous attempt last summer to limit state AI legislation and comes in the wake of a December executive order that established an AI Litigation Task Force to challenge state regulations deemed inconsistent with federal interests. The administration argues that a fragmented regulatory landscape across states stifles competitive development and hampers the U.S. in the global AI race, particularly with countries like China.
The framework emphasizes a light-touch federal approach that seeks to minimize regulation while asserting that state laws should not undermine national strategies for achieving global AI dominance. According to the guidelines, states should refrain from regulating AI development, which is described as an “inherently interstate phenomenon” with implications for foreign policy and national security. The administration also suggests that states cannot impose penalties on AI developers for the unlawful actions of third parties involving their models, addressing a contentious area of liability concerning AI misuse.
Nevertheless, certain provisions in the framework allow states to retain some regulatory powers. For instance, state laws addressing workforce upskilling with AI tools and educational applications can override federal regulations. The guidance does not preempt state zoning laws for the construction of data centers and permits states to use AI in public services, such as law enforcement and education, albeit with potentially varying implementations across the country. This raises concerns, especially regarding civil rights implications tied to AI in policing.
Earlier, Congress attempted to bar states from enacting AI regulations for a decade by conditioning federal broadband and AI infrastructure funding on compliance. That effort, however, faced significant backlash and was ultimately defeated, preserving states’ authority to legislate AI within their borders. Legal experts indicate that absent a comprehensive federal AI law, states will continue to exercise their legislative powers, particularly in California, where recent state laws have advanced AI safety protocols.
California’s SB-53, effective January 1, mandates that AI model developers disclose their strategies for mitigating risks and report safety incidents, with penalties of up to $1 million for non-compliance. New York has enacted a similar law known as the RAISE Act, which imposes stricter reporting timelines and higher penalties. Both states have sought to fill the regulatory vacuum in a rapidly evolving sector that has largely evaded comprehensive oversight. However, some experts criticize these laws as insufficient, arguing that they do not impose adequate safety testing or third-party evaluations of AI systems.
The recent focus on AI governance comes as enterprise customers and investors increasingly prioritize issues such as liability, cybersecurity, and governance in their dealings with AI companies. This growing emphasis may push companies to adopt more robust internal governance practices, particularly as they navigate legislation that could expose them to greater liability risks.
Despite the administration’s push for federal oversight, regulatory experts caution that the landscape remains complex and uncertain. Lily Li, a data protection lawyer, points out that existing federal laws, like HIPAA for healthcare, allow states to implement more stringent regulations. This dynamic complicates the Trump administration’s attempts to centralize AI governance, particularly in states that have already enacted their own measures.
In the context of these developments, discussions surrounding the balance between innovation and safety in AI are likely to intensify. Experts such as Gideon Futerman from the Center for AI Safety argue that while SB-53 represents a significant step toward transparency and accountability, the current regulatory framework still falls short of addressing the potential risks associated with AI technologies. As AI continues to evolve, balancing regulatory oversight with fostering innovation will remain a critical challenge for lawmakers at both the federal and state levels.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health