On March 17, 2026, the Colorado AI Policy Work Group, with backing from Governor Jared Polis, introduced a new legal framework intended to replace Colorado's existing law, Concerning Consumer Protections in Interactions with AI Systems, commonly known as the Colorado AI Act. The decision was rooted in a desire to streamline the regulatory landscape surrounding artificial intelligence, which has grown increasingly complex amid rapid technological advancement. The Colorado AI Act, initially set to go into effect on February 1, 2026, was already recognized as one of the most comprehensive AI laws in the United States. Its effective date has since been postponed to June 30, 2026, allowing time for legislative reforms.
The proposed framework, titled Concerning the Use of Automated Decision Making Technology in Consequential Decisions (Proposed ADMT Framework), shifts the focus away from the stringent obligations of the previous law toward transparency, recordkeeping, and consumer rights. The newly crafted framework aligns more closely with data privacy regulations than with the governance-centric stipulations found in the EU AI Act. If enacted, the ADMT Framework would take effect on January 1, 2027, giving developers and deployers of automated decision-making technology until the end of 2026 to make compliance adjustments.
One of the most significant changes under the Proposed ADMT Framework is the redefinition of applicable systems. While the Colorado AI Act employed a broad definition in accordance with the OECD AI Principles, the new proposal adopts terminology and standards prevalent in data privacy laws. Automated Decision Making Technology is now defined as "any technology that processes personal information and uses computation to generate output including predictions, recommendations, classifications, rankings, scores, or other information that is used to make, guide, or assist a decision concerning an individual." The obligations will apply specifically to "Covered ADMT" that serves to "materially influence" consequential decisions, raising the bar from the previous "substantial factor" standard. Under this higher standard, the technology's output must significantly affect the decision outcome, rather than merely assist in it.
The Proposed ADMT Framework maintains a focus on consequential decisions similar to those outlined in the Colorado AI Act, but with notable alterations. The new framework eliminates the category of AI-related decisions concerning the provision or denial of legal services. It also clarifies the scope of several remaining categories: housing decisions will encompass the lease or purchase of residential real estate, while insurance decisions will include underwriting, pricing, and claims adjudication that materially influence access to benefits. Essential government services, which primarily concern public benefits, will also fall under the framework.
Regarding obligations, the Proposed ADMT Framework significantly reduces the requirements previously established by the Colorado AI Act. For example, it removes the need for developers to report known or reasonably foreseeable risks of algorithmic discrimination, conduct AI impact assessments, and implement a risk management policy. Instead, it specifies distinct obligations for developers and deployers of Covered ADMT. Developers are required to provide technical documentation detailing the intended uses, risks, and limitations of the technology. Deployers, in turn, must inform consumers when a Covered ADMT is used in consequential decisions and provide notifications regarding adverse outcomes that may result from the technology's application.
The rulemaking process will be overseen by the Colorado Attorney General, who will formulate regulations to further clarify the obligations set forth in the Proposed ADMT Framework. The specific nature of notifications required for adverse outcomes will vary, depending on the types of consequential decisions involved, which include employment, housing, lending, and healthcare, as well as their interactions with existing federal or state laws.
In terms of enforcement, similar to the Colorado AI Act, the Proposed ADMT Framework does not allow for private lawsuits and is to be enforced by the Colorado Attorney General. Notably, before an enforcement action can be initiated, the Attorney General must provide written notice of any alleged violation, allowing the involved party a 90-day period to rectify the situation. If the violation is addressed within this timeframe, civil penalties will not be pursued, although injunctive relief may still be sought to prevent future infractions.
This proposed framework reflects Colorado’s evolving approach to AI regulation, prioritizing consumer rights and transparency while simultaneously aligning with broader data privacy trends. As the state prepares for the anticipated changes, stakeholders in the AI landscape will need to adapt their compliance strategies accordingly, mindful of the implications this new framework may hold for the future of automated decision-making technologies.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health