

White House Proposes Federal AI Regulation Framework to Preempt State Laws

White House urges Congress to establish a federal AI regulation framework to prevent state laws from hindering innovation, amid rising tensions over data privacy and costs.

By SEUNG MIN KIM and MATT O’BRIEN

WASHINGTON (AP) — The White House announced on Friday that it is urging Congress to “preempt state AI laws” deemed excessively burdensome, proposing a comprehensive framework for addressing artificial intelligence (AI) concerns while fostering growth and innovation in the sector. The legislative outline presents a set of guiding principles that emphasize the protection of children, the prevention of soaring electricity costs, the respect of intellectual property rights, the avoidance of censorship, and the education of Americans on AI usage.

House Republican leaders expressed swift support for the framework, indicating their readiness to collaborate across party lines to advance legislation. However, navigating the legislative landscape is expected to be challenging, particularly in a midterm election year, as divisions over AI persist among lawmakers.

This initiative arrives as state governments have begun implementing their own AI regulations and as civil liberties and consumer rights advocates press for more stringent oversight of the technology. The industry, along with the White House, contends that a fragmented regulatory environment would hinder growth. President Trump signed an executive order in December aimed at preventing states from establishing their own rules.

“This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America’s lead in the AI race,” stated White House AI czar David Sacks in a social media post earlier this week. He emphasized the need to collaborate with Congress to translate the administration’s principles into federal legislation.

Despite the hurdles associated with passing comprehensive AI legislation, the framework seeks to identify common ground between AI-skeptical Republicans and Democrats by addressing widespread concerns, such as the potential risks that AI chatbots may pose to children and the rising electricity expenses associated with AI infrastructures. Neil Chilson, a former chief technologist for the Federal Trade Commission and current leader of AI policy at the Abundance Institute, noted, “It covers basically all the key sticking points I think that might stop an AI bill from moving through Congress.”

Four states (Colorado, California, Utah, and Texas) have already enacted laws regulating AI in the private sector, including stipulations that limit the collection of personal information and enhance transparency requirements for companies. The White House is advocating for "strong federal leadership" to ensure public trust in how AI is utilized in daily life.

As public backlash against data centers grows alongside rising energy prices, the White House has intensified pressure on AI companies and the power sector to take action. Earlier this month, AI firms were encouraged to sign voluntary pledges to construct their own power generation facilities.

However, the Trump administration clarified that it does not support a complete preemption of state regulatory powers over AI. It acknowledges the necessity of state enforcement of general laws aimed at protecting children, preventing fraud, and safeguarding consumers. The administration also maintains that local authorities should retain the right to decide the placement of data centers and other AI infrastructures, as well as how states procure AI tools for law enforcement or educational purposes.

Nevertheless, the framework asserts that states should not regulate AI development or penalize AI developers for the unlawful actions of third parties using their products. Moreover, it argues that state regulations should not impose undue burdens on lawful AI activities.

In addressing potential legal conflicts between artists and creators and tech companies that have utilized substantial quantities of copyrighted works to train AI systems, the framework recommends against intervention. It indicates that the administration “believes that training of AI models on copyrighted material does not violate copyright laws,” although it acknowledges the existence of contrary arguments and supports allowing courts to adjudicate these disputes.

The ongoing legal landscape features numerous lawsuits from writers, publishers, visual artists, and record labels, with judges generally favoring AI developers in permitting "fair use" of copyrighted materials to generate new content. However, concerns remain regarding the methods by which those materials were obtained: a federal judge recently approved a $1.5 billion settlement between the AI company Anthropic and authors alleging that nearly half a million books were illegally pirated to train its chatbot.

This latest move by the White House reflects a growing recognition of the need for a cohesive AI regulatory framework at the federal level, as states continue to pursue their own divergent paths. As this conversation unfolds, the implications for innovation, consumer protections, and the overall trajectory of AI regulation in the U.S. remain significant.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.