The Federal Trade Commission (FTC) must navigate a complex legal landscape following an executive order from President Trump aimed at preempting state consumer protection laws governing artificial intelligence (AI). The order directs the FTC to clarify when state laws that mandate changes to AI outputs may conflict with the federal prohibition on deceptive practices in commerce. The agency must issue a policy statement by March 11.
Section 5 of the FTC Act prohibits unfair or deceptive acts or practices in or affecting commerce. The executive order invokes the established concept of deception: a misrepresentation or omission likely to mislead a reasonable consumer to the consumer's detriment. False advertising, for instance, misrepresents the quality or usefulness of a product, leading consumers to make purchases based on inaccurate information.
The executive order contends that some states have enacted laws compelling firms to embed “ideological bias” in AI models, a claim that follows an earlier directive banning “woke AI” from the federal government. This order targets AI models that incorporate “social agendas,” which the administration argues distort the accuracy of outputs. The new order asserts that state laws enforcing such ideological biases could inherently mislead users about politically sensitive topics.
However, whether any such state laws actually exist is disputed. Critics note that the most commonly cited instances of perceived "woke AI," such as Gemini generating images that depicted the founding fathers as Black, stemmed from companies' own training decisions rather than from any state mandate. When those design choices are made independently, the First Amendment protects them from regulation by state and federal authorities alike.
To advance any preemption effort, the FTC faces significant hurdles rooted in federalism. Under the Constitution's Supremacy Clause, federal law supersedes conflicting state law. Preemption can be established expressly, through explicit statutory language, or impliedly, when federal law occupies an entire regulatory field. Section 5 of the FTC Act, however, contains no express preemption language, nor does it occupy the field of consumer protection, which the states have long regulated alongside the federal government.
This limitation leaves the FTC to explore conflict preemption, which applies when compliance with both federal and state law is impossible. Because Section 5 prohibits deceptive acts, a state law that required companies to deceive consumers would create such a conflict. Courts, however, apply a presumption against preemption, declining to read federal law as overriding state law unless Congress clearly intended that result. Section 5 is framed in general terms and lacks the specific prescriptive rules that would signal such intent.
Hence, to preempt state AI laws through regulation, the FTC would need to undertake a lengthy rulemaking process under the Administrative Procedure Act and its own enhanced Magnuson-Moss procedures: advance notices, proposed rules, public comment periods, and comprehensive regulatory analyses. That process could take years, delaying any significant regulatory action.
The executive order directs the FTC to illustrate how state AI laws could conflict with Section 5, and it points to Colorado's Artificial Intelligence Act, which has not yet taken effect, leaving any claims about its practical consequences speculative. The act's prohibition on "algorithmic discrimination" could be read as forcing AI models to produce inaccurate results in order to avoid differential treatment of protected groups. Colorado, however, is likely to respond that the law aims to prevent AI from replicating biases embedded in historical training data.
Determining whether such laws would actually lead to consumer deception remains a complex question. The FTC would need to identify specific instances of conflict to justify preemption, a task complicated by competing interpretations of what constitutes a deceptive practice.
Importantly, Section 5 reaches deception only "in or affecting commerce." The FTC's enforcement mandate does not extend to publishing or other non-commercial speech, which suggests the agency's oversight of subjective, opinion-based AI outputs is limited. By directing the FTC to police ideological bias and the truthfulness of outputs, the executive order may push the agency beyond its jurisdiction, which traditionally targets business practices rather than opinions.
Ultimately, the FTC’s ability to preempt state laws regulating AI appears constrained by both legal doctrine and procedural complexities, requiring far more than a simple policy statement to enact significant change. The implications of any FTC actions will likely resonate through the tech sector and continue to shape the regulatory landscape of artificial intelligence in the United States.