By SEUNG MIN KIM and MATT O’BRIEN
WASHINGTON (AP) — The White House on Friday urged Congress to "preempt state AI laws" it deems excessively burdensome, releasing a legislative framework for addressing artificial intelligence concerns while fostering growth and innovation in the sector. The outline lays out guiding principles that emphasize protecting children, preventing soaring electricity costs, respecting intellectual property rights, avoiding censorship, and educating Americans on AI.
House Republican leaders quickly backed the framework, signaling their readiness to work across party lines to advance legislation. Passage is expected to be difficult, however, particularly in a midterm election year, as lawmakers remain divided over AI.
The initiative comes as state governments have begun implementing their own AI regulations and as civil liberties and consumer rights groups press for more stringent oversight of the technology. The industry, along with the White House, contends that a fragmented regulatory environment would hinder growth. President Trump signed an executive order in December aimed at preventing states from establishing their own rules.
“This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America’s lead in the AI race,” stated White House AI czar David Sacks in a social media post earlier this week. He emphasized the need to collaborate with Congress to translate the administration’s principles into federal legislation.
Despite the hurdles to passing comprehensive AI legislation, the framework seeks common ground between AI-skeptical Republicans and Democrats by addressing widespread concerns, such as the risks AI chatbots may pose to children and the rising electricity costs associated with AI infrastructure. Neil Chilson, a former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said, "It covers basically all the key sticking points I think that might stop an AI bill from moving through Congress."
Four states—Colorado, California, Utah, and Texas—have already enacted laws regulating AI in the private sector, including provisions that limit the collection of personal information and add transparency requirements for companies. The White House is advocating "strong federal leadership" to ensure public trust in how AI is used in daily life.
As public backlash against data centers grows alongside rising energy prices, the White House has stepped up pressure on AI companies and the power sector to act. Earlier this month, the administration encouraged AI firms to sign voluntary pledges to build their own power generation facilities.
The Trump administration has said, however, that it does not support a complete preemption of state regulatory powers over AI. It acknowledges that states must still enforce general laws protecting children, preventing fraud, and safeguarding consumers. The administration also maintains that local authorities should retain the right to decide where data centers and other AI infrastructure are placed, and that states should decide how they procure AI tools for law enforcement or education.
Nevertheless, the framework asserts that states should not regulate AI development or penalize AI developers for the unlawful actions of third parties using their products. Moreover, it argues that state regulations should not impose undue burdens on lawful AI activities.
On the legal conflicts between creators and tech companies that have used vast quantities of copyrighted works to train AI systems, the framework recommends against intervention. It says the administration "believes that training of AI models on copyrighted material does not violate copyright laws," while acknowledging contrary arguments and supporting letting courts adjudicate these disputes.
Numerous lawsuits from writers, publishers, visual artists, and record labels are working through the courts, with judges generally siding with AI developers in permitting "fair use" of copyrighted materials to generate new content. Concerns remain, however, about how those materials were obtained. A federal judge recently approved a $1.5 billion settlement between the AI company Anthropic and authors who alleged that nearly half a million books were illegally pirated to train its chatbot.
The move reflects a growing push for a cohesive federal AI regulatory framework as states continue to pursue divergent paths, with significant implications for innovation, consumer protection, and the trajectory of AI regulation in the U.S.