
Trump Administration Targets State AI Regulations with New Federal Guidance

Trump administration issues new guidance to limit state-level AI regulations, asserting federal dominance to boost U.S. competitiveness against China.

The Trump administration has unveiled new policy guidance aimed at shaping federal regulation of artificial intelligence (AI), signaling a renewed effort to override state-level laws that it views as hindrances to innovation. This guidance, released on Friday, follows a previous attempt last summer to limit state AI legislation and comes in the wake of a December executive order that established an AI Litigation Task Force to challenge state regulations deemed inconsistent with federal interests. The administration argues that a fragmented regulatory landscape across states stifles competitive development and hampers the U.S. in the global AI race, particularly with countries like China.

The framework emphasizes a light-touch federal approach that seeks to minimize regulation while asserting that state laws should not undermine national strategies for achieving global AI dominance. According to the guidelines, states should refrain from regulating AI development, which is described as an “inherently interstate phenomenon” with implications for foreign policy and national security. The administration also suggests that states cannot impose penalties on AI developers for the unlawful actions of third parties involving their models, addressing a contentious area of liability concerning AI misuse.

Nevertheless, certain provisions in the framework allow states to retain some regulatory powers. For instance, state laws addressing workforce upskilling with AI tools and educational applications of AI are not preempted. The guidance likewise leaves state zoning laws for the construction of data centers intact and permits states to use AI in public services, such as law enforcement and education, albeit with potentially varying implementations across the country. This raises concerns, especially regarding the civil rights implications of AI in policing.

Congress previously attempted to bar states from enacting AI regulations for a decade by conditioning federal funding for broadband and AI infrastructure on compliance. That effort, however, faced significant backlash and was ultimately defeated, preserving states' authority to legislate on AI within their borders. Legal experts indicate that without a comprehensive federal AI law, states will continue to exercise their legislative powers, particularly California, where recent state laws have advanced AI safety protocols.

California’s SB-53, effective January 1, mandates that AI model developers disclose their strategies for mitigating risks and report safety incidents, with penalties of up to $1 million for non-compliance. New York has enacted a similar law known as the RAISE Act, which imposes stricter reporting timelines and higher penalties. Both states have sought to fill the regulatory vacuum in a rapidly evolving sector that has largely evaded comprehensive oversight. However, some experts criticize these laws as insufficient, arguing that they do not impose adequate safety testing or third-party evaluations of AI systems.

The renewed focus on AI governance comes as enterprise customers and investors increasingly prioritize issues such as liability, cybersecurity, and governance in their dealings with AI companies. This growing emphasis may push companies to adopt more robust internal governance practices, particularly as they navigate legislation that could expose them to greater liability risks.

Despite the administration’s push for federal oversight, regulatory experts caution that the landscape remains complex and uncertain. Lily Li, a data protection lawyer, points out that existing federal laws, like HIPAA for healthcare, allow states to implement more stringent regulations. This dynamic complicates the Trump administration’s attempts to centralize AI governance, particularly in states that have already enacted their own measures.

In the context of these developments, discussions surrounding the balance between innovation and safety in AI are likely to intensify. Experts such as Gideon Futerman from the Center for AI Safety argue that while SB-53 represents a significant step toward transparency and accountability, the current regulatory framework still falls short of addressing the potential risks associated with AI technologies. As AI continues to evolve, balancing regulatory oversight with fostering innovation will remain a critical challenge for lawmakers at both the federal and state levels.

Written By AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.