Signed into law on July 4, 2025, the “One Big Beautiful Bill” notably omitted a proposed 10-year moratorium on state laws regulating artificial intelligence (AI). Without a federal framework, states are rapidly enacting their own regulations, leaving companies that deploy AI uncertain about their compliance obligations and potential liabilities. This article outlines key developments in state-level AI legislation as jurisdictions respond to the technology’s growing influence across sectors.
In a bid to establish a cohesive national policy, President Trump signed an executive order on December 11, 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order seeks to preserve the United States’ global edge in AI while fostering a minimally burdensome regulatory environment, and it creates an AI Litigation Task Force charged with challenging state laws that conflict with federal policy. States, meanwhile, continue to introduce AI legislation with increasing urgency.
Among the first states to enact comprehensive AI regulation is Colorado, which adopted the Colorado Artificial Intelligence Act in May 2024. Set to take effect on June 30, 2026, the law reaches both “developers” and “deployers” of AI systems doing business in the state. Its primary objective is to prevent “algorithmic discrimination” that adversely affects individuals on the basis of protected characteristics such as age, race, and disability. The Act requires covered companies to adopt an AI risk management policy and conduct impact assessments to identify and mitigate discrimination risks. Businesses must also disclose their use of high-risk AI to consumers, who are granted the right to appeal adverse decisions made by AI systems.
Utah followed with its Artificial Intelligence Policy Act, which took effect on May 1, 2024. The law focuses on generative AI (GenAI), a technology thrust into prominence by the launch of ChatGPT in November 2022. It requires that consumers be informed when they are interacting with GenAI, particularly in sensitive transactions involving personal data. A separate law, effective May 7, 2025, governs mental health chatbots, requiring such chatbots to clearly identify themselves as AI and barring the sale of individually identifiable health information without consent. Violations can result in penalties of up to $2,500.
Texas has likewise moved to regulate AI through the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), passed in June 2025 and effective January 1, 2026. The law prohibits AI systems developed or deployed to encourage physical harm, infringe constitutional rights, or discriminate against protected classes. It also requires that consumers receive clear and conspicuous disclosure when they are interacting with AI. The Texas Attorney General enforces the Act and may impose fines of up to $12,000 for curable violations and up to $200,000 for non-curable ones.
California has also made strides with the long-awaited Automated Decision-Making Technology (ADMT) Regulations issued under the California Consumer Privacy Act (CCPA) on September 23, 2025. With compliance required by January 1, 2027, the regulations define automated decision-making technology and require businesses to provide consumers with pre-use notices when ADMT is used to make significant decisions about them. These notices must describe the technology used, the consumer’s right to opt out, and the categories of personal information analyzed. Businesses must also conduct risk assessments weighing the privacy risks of their AI systems against the potential benefits.
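To make the notice requirement concrete, a deployer might represent a pre-use notice as structured data and check it for completeness before rollout. The sketch below is purely illustrative: the field names are hypothetical shorthand for the categories of information described above, not the regulations’ actual text, and real notices carry considerably more detail.

```python
# Illustrative sketch of a CCPA ADMT pre-use notice record.
# Field names are hypothetical; they mirror the categories of
# information described in the article, not the regulation's text.

REQUIRED_FIELDS = {
    "purpose",             # the significant decision the ADMT informs
    "technology_summary",  # plain-language description of how it works
    "opt_out_method",      # how the consumer can opt out
    "pi_categories",       # categories of personal information analyzed
}


def missing_fields(notice: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return sorted(f for f in REQUIRED_FIELDS if not notice.get(f))


notice = {
    "purpose": "Automated screening of rental applications",
    "technology_summary": "Scoring model weighing credit and income data",
    "opt_out_method": "Web form at example.com/privacy/admt-opt-out",
    "pi_categories": ["financial information", "employment history"],
}

gaps = missing_fields(notice)
print("Notice complete" if not gaps else f"Missing fields: {gaps}")
```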
As organizations adopt AI to enhance operational efficiency and competitive advantage, the need for robust AI governance frameworks becomes increasingly critical. The National Institute of Standards and Technology (NIST) has emphasized that AI systems pose risks distinct from those of traditional software, releasing its voluntary Artificial Intelligence Risk Management Framework (AI RMF) in January 2023. Organized around four core functions (Govern, Map, Measure, and Manage), the framework guides organizations in establishing risk management practices tailored to AI deployment.
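As a rough illustration of how such a framework might be operationalized, the sketch below models a simple AI risk register whose entries are tagged with the AI RMF’s core functions. Everything here is an assumption for illustration: NIST does not prescribe any particular data structure, and the class and field names are invented.

```python
"""A minimal, hypothetical AI risk register loosely organized around
the NIST AI RMF's four core functions. Names are illustrative only;
NIST does not prescribe any particular schema."""

from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    GOVERN = "govern"    # policies, accountability, culture
    MAP = "map"          # context, intended use, affected groups
    MEASURE = "measure"  # testing, metrics, impact assessment
    MANAGE = "manage"    # mitigation, monitoring, response


@dataclass
class RiskEntry:
    system_name: str           # the AI system under review
    description: str           # the specific risk being tracked
    rmf_function: RMFFunction  # where in the lifecycle it is addressed
    protected_classes: list[str] = field(default_factory=list)
    mitigation: str = ""
    is_open: bool = True


# Example entry: an algorithmic-discrimination risk of the kind the
# Colorado AI Act requires deployers to assess and mitigate.
register = [
    RiskEntry(
        system_name="resume-screening-model",
        description="Model may systematically score older applicants lower",
        rmf_function=RMFFunction.MEASURE,
        protected_classes=["age"],
        mitigation="Annual disparate-impact testing across age bands",
    ),
]

for entry in register:
    status = "OPEN" if entry.is_open else "CLOSED"
    print(f"[{status}] {entry.system_name}: {entry.description}")
```

A register along these lines also maps naturally onto the impact-assessment and risk-management-policy obligations that laws such as the Colorado AI Act impose on deployers.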
With states independently navigating the challenges of AI regulation, companies must proactively develop governance and risk management strategies that comply with existing laws and can adapt as the legal landscape evolves. As the discourse surrounding AI expands, striking the right balance between innovation and regulation will be crucial in shaping the future of AI applications across all sectors.