The Trump administration has proposed a comprehensive framework for regulating artificial intelligence (AI) in the United States, emphasizing children’s online safety, intellectual property, and the need for federal leadership in the evolving AI landscape. Released on March 20, the policy recommendations aim to address public concerns about the impact of AI on daily life while advocating for preemption of state laws that the administration says could undermine innovation and competitiveness.
The proposal is part of a broader strategy to establish a cohesive national approach to AI, particularly as lawmakers consider their own initiatives. The White House coordinated closely with Senator Marsha Blackburn, R-Tenn., who unveiled a similar AI framework shortly beforehand, highlighting shared priorities such as children’s safety and data privacy.
In a press release, the administration underscored the necessity of a national AI framework, stating that “some Americans feel uncertain about how this transformative technology will affect issues they care about.” It emphasized that addressing concerns related to children’s safety requires federal leadership to build trust in AI technologies.
Key areas addressed in the proposal include AI infrastructure, free speech, censorship, and the regulation of AI systems. The recommendations were mandated by an executive order issued in December 2025 that aimed to preempt state AI regulations. White House Special Advisor for AI and Crypto, David Sacks, and Director of the Office of Science and Technology Policy, Michael Kratsios, are leading the initiative, which seeks to lay the groundwork for a “uniform” regulatory framework.
Among the administration’s primary concerns is the protection of children online, a focus that has been consistent since the inception of the executive order. The proposed regulations highlight the importance of privacy and data security, advocating for the affirmation that existing child privacy laws should apply to AI systems. This includes limiting data collection for model training and targeted advertising, alongside proposals for “robust tools” that allow parents to manage their children’s online experiences.
The proposal also emphasizes the need for age verification to ensure age-appropriate access to AI technologies. It suggests implementing “commercially reasonable, privacy protective, age assurance requirements,” such as parental attestations, mirroring recommendations from Blackburn’s framework. The phrase “likely to be accessed by minors” is mentioned, indicating the administration is open to establishing knowledge standards in the ongoing debate surrounding the Kids Online Safety Act.
A significant aspect of the recommendations concerns the preemption of state laws. While urging Congress not to undermine state efforts to protect children, the administration is seeking clarity on the scope of federal authority, stating that any preemption should ensure state regulations do not intrude on areas better suited for federal oversight. The proposal argues against allowing states to regulate AI development or to hold developers accountable for unlawful third-party use of their technologies.
In tandem with its preemption strategy, the administration has suggested a relaxed enforcement approach under the final framework, proposing AI regulatory sandboxes to promote innovation. Senator Ted Cruz, R-Texas, has already introduced a proposal for a two-year sandbox program that would exempt participants from certain federal regulations affecting product development.
Rather than establishing a standalone AI regulatory body, the administration advocates for the involvement of existing regulatory bodies with relevant expertise, emphasizing the role of “industry-led standards” in supporting sector-specific AI development.
As the landscape of artificial intelligence continues to evolve, these recommendations signal a pivotal shift toward a federal strategy designed to balance innovation with the protection of public interests, particularly those of vulnerable populations such as children. The coming months will likely see intense discussions among lawmakers as they weigh these proposals against their own initiatives, shaping the future of AI regulation in the United States.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health