Colorado has become the first state in the U.S. to enact a comprehensive statute regulating artificial intelligence (AI) systems in employment decisions, with a particular focus on “high-risk” AI systems. The Colorado Artificial Intelligence Act (CAIA), set to take effect on February 1, 2026, targets “algorithmic discrimination”: unlawful differential treatment of, or impact on, individuals based on classifications protected under state or federal law. The Colorado Attorney General has exclusive enforcement authority; the statute creates no private right of action.
At the heart of the CAIA is the definition of “high-risk” AI systems: those that make, or are a substantial factor in making, consequential decisions with legal or similarly significant effects. Employment decisions are expressly included in this category, and the statute clarifies that AI-generated output can itself qualify as a substantial factor in such a decision. Systems that perform narrow procedural tasks, such as detecting anomalies without replacing human judgment, are excluded from the classification. Common technologies, such as anti-malware software and calculators, are likewise exempt unless they directly contribute to consequential decisions.
The law regulates both “developers,” who create or substantially modify AI systems, and “deployers,” who use high-risk systems. Employers typically fall into the deployer category, though those who substantially alter vendor AI tools may also qualify as developers. Notably, ongoing model learning that is disclosed in the initial impact assessment does not count as an intentional and substantial modification under the statute.
The CAIA extends protections to “consumers,” defined as Colorado residents, a group that includes job applicants and employees. Deployers with fewer than 50 full-time-equivalent employees are exempt from certain requirements, such as maintaining a risk management policy and publishing a public statement, provided they do not train the system on their own data and satisfy other statutory conditions. Core obligations, including pre-decision notices and adverse-action explanations, still apply.
Beginning February 1, 2026, both developers and deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. A rebuttable presumption of reasonable care applies if they meet the statutory requirements and any rules issued by the Attorney General. Deployers must implement a lifecycle risk management policy aligned with established frameworks, conduct annual impact assessments, and separately review each high-risk system to confirm it does not result in algorithmic discrimination. They must also notify consumers when a high-risk AI system is used in a consequential decision concerning them and explain any adverse decision it informs.
Developers, for their part, must provide deployers with detailed documentation on potential harmful uses and the discrimination risks associated with their AI systems. They must also publish and keep current a public statement summarizing the high-risk systems they offer, and promptly notify the Attorney General upon discovering that a system may have caused algorithmic discrimination.
The CAIA treats violations as unfair or deceptive trade practices, enforceable solely by the Attorney General, who may require deployers to disclose their risk management policies and impact assessments to assess compliance, subject to protections for proprietary information. An affirmative defense is available to an organization that discovers and cures a violation through established feedback mechanisms or recognized compliance frameworks. Certain federally regulated systems and organizations, including HIPAA-covered entities and regulated banks, are exempt but bear the burden of demonstrating their eligibility for the exemption.
Organizations are advised to start by inventorying their AI systems to determine which qualify as high-risk and to clarify whether they act as deployers, developers, or both. Building an AI governance program aligned with an established risk management framework, such as the NIST AI RMF or ISO/IEC 42001, is critical. They should also prepare standardized templates for impact assessments and consumer notifications, and establish protocols for reporting potential algorithmic discrimination to the Attorney General.
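The inventory-and-classification step lends itself to a structured first pass. The Python sketch below is illustrative only: it assumes an internal survey has already captured each system's attributes, and the AISystem fields, the Role enum, and the is_high_risk screen are simplified stand-ins for the statutory definitions, not a legal test. Any automated classification should be reviewed by counsel.

```python
# A minimal sketch of an AI-system inventory under assumed survey fields.
# Attribute names and the screening rule are illustrative, not statutory.
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    DEPLOYER = auto()   # uses a high-risk system in its own decisions
    DEVELOPER = auto()  # creates, or substantially modifies, such a system


@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str
    # Is the system's output a substantial factor in a consequential
    # decision (e.g., hiring, promotion, termination)?
    substantial_factor: bool
    # Narrow procedural tools (anomaly detection, spell-check, calculators)
    # that do not replace human judgment fall outside the definition.
    narrow_procedural_task: bool = False
    substantially_modified_in_house: bool = False


def is_high_risk(system: AISystem) -> bool:
    """Rough first-pass screen mirroring the high-risk definition."""
    return system.substantial_factor and not system.narrow_procedural_task


def roles(system: AISystem) -> set[Role]:
    """An employer is typically a deployer; substantial in-house
    modification of a vendor tool can also trigger developer duties."""
    r = {Role.DEPLOYER}
    if system.substantially_modified_in_house:
        r.add(Role.DEVELOPER)
    return r


if __name__ == "__main__":
    # Hypothetical example: a vendor resume-screening tool tuned in-house.
    screener = AISystem(
        name="ResumeRank",
        vendor="ExampleVendor",
        use_case="ranks job applicants for interviews",
        substantial_factor=True,
        substantially_modified_in_house=True,
    )
    if is_high_risk(screener):
        print(f"{screener.name}: high-risk; roles = "
              f"{sorted(r.name for r in roles(screener))}")
```

Recording the "substantial factor" and "narrow task" attributes separately mirrors the way the statute structures its definition, which keeps the screening rule auditable when definitions are refined by Attorney General rulemaking.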
As the regulatory landscape for AI continues to evolve, organizations must stay informed about any rulemaking from the Attorney General that could impact compliance obligations. This law represents a significant step toward ensuring fairness and accountability in AI usage within the workplace, setting a precedent that may influence similar regulations in other states.