AI Regulation

State AI Regulations Surge Amid Federal Inaction: Key Legislation from Colorado, Utah, and Texas

States are racing to enact AI regulations, with Colorado’s law mandating risk assessments and consumer rights to appeal adverse AI decisions, effective June 30, 2026.

On July 4, 2025, the “One Big Beautiful Bill” was signed into law; the final version notably excluded a proposed 10-year moratorium on state laws regulating artificial intelligence (AI). Without a federal framework, states are rapidly enacting their own regulations, leaving companies that deploy AI technologies uncertain about compliance obligations and potential liabilities. This article outlines key developments in state-level AI legislation as jurisdictions respond to the growing influence of AI across sectors.

In a bid to establish a cohesive national policy, President Trump signed an executive order on December 11, 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence.” The initiative aims to maintain the United States’ global edge in AI while fostering a minimally burdensome regulatory environment. This order also created an AI Litigation Task Force dedicated to contesting state laws that conflict with federal policies, even as states continue to introduce AI legislation with increasing urgency.

Among the first to implement comprehensive regulation is Colorado, which enacted the Colorado Artificial Intelligence Act in May 2024. Set to take effect on June 30, 2026, the law applies to both “developers” and “deployers” of AI systems operating within the state. Its primary objective is to prevent “algorithmic discrimination” that adversely affects individuals based on protected classifications, including age, race, and disability. The Act requires companies to adopt an AI risk management policy and conduct AI impact assessments to identify and mitigate discrimination risks. Businesses must also disclose the use of high-risk AI to consumers, who are granted the right to appeal adverse decisions made by AI systems.

Following closely is the Utah Artificial Intelligence Policy Act, which took effect on May 1, 2024. This law is particularly focused on generative AI, reflecting the technology’s rise to prominence following the launch of ChatGPT in November 2022. It requires that consumers be informed when interacting with generative AI, particularly in sensitive transactions involving personal data. A separate law effective May 7, 2025, governs mental health chatbots, mandating explicit identification of AI in interactions and prohibiting the sale of identifiable health information without consent. Violations can result in penalties of up to $2,500 per violation.

Texas, too, has taken steps to regulate AI through the Responsible Artificial Intelligence Governance Act (TRAIGA), passed in June 2025 and effective January 1, 2026. The legislation prohibits AI systems that may encourage physical harm, infringe on constitutional rights, or discriminate against protected classes. It also mandates that consumers be clearly informed when they are interacting with AI, with disclosure requirements designed to be straightforward and accessible. The Texas Attorney General has the authority to enforce the law, imposing fines of up to $12,000 for curable violations and $200,000 for non-curable violations.

California has also made strides in AI regulation with the issuance of long-awaited Automated Decision-Making Technology Regulations under the California Consumer Privacy Act (CCPA) on September 23, 2025. Set to take effect on January 1, 2027, these regulations define automated decision-making technologies and require businesses to provide consumers with pre-use notices when significant decisions are made through AI. These notices must describe the technology used, consumers’ right to opt out, and the categories of personal information analyzed. Companies must also conduct risk assessments weighing the privacy risks of their AI systems against potential benefits.

As organizations continue to adopt AI to enhance operational efficiencies and competitive advantages, the need for robust AI governance frameworks becomes increasingly critical. The National Institute of Standards and Technology (NIST) has emphasized the uniqueness of risks posed by AI systems, introducing the Artificial Intelligence Risk Management Framework (AI RMF) in January 2023. This voluntary framework aims to guide organizations in establishing comprehensive risk management practices tailored to AI deployment.

With states independently navigating the challenges of AI regulation, companies must proactively develop governance and risk management strategies that not only comply with existing laws but also adapt to the evolving landscape of AI technology. As the discourse surrounding AI continues to expand, the balance between innovation and regulation will be crucial in shaping the future of AI applications across all sectors.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.