The Colorado AI Act (SB24-205) takes effect on June 30, 2026. If your organization develops or deploys high-risk AI systems that touch consequential decisions in Colorado, you have roughly eight weeks to get your governance house in order.
This is not a distant regulatory possibility. It is a near-term operational requirement with real enforcement teeth: violations are treated as violations of Colorado's Consumer Protection Act, and the Attorney General has signaled readiness to act.
Here is what compliance actually requires, and where most organizations are falling short.
Who Is Covered
The law applies to two categories of organizations. Developers build or substantially modify AI systems. Deployers use those systems to make, or serve as a substantial factor in making, consequential decisions about Colorado consumers. Consequential decisions include areas like employment, lending, insurance underwriting, housing, education, and access to essential services.
If your credit union uses an AI-powered loan decisioning tool, you are a deployer. If your insurance company relies on an algorithmic model for underwriting risk, you are a deployer. If the vendor who built those tools sells into Colorado, they are a developer.
Both sides carry obligations.
The Core Requirements for Deployers
Deployers face the most immediate operational burden. Starting June 30, you must have the following in place.
A risk management policy and program. This is not a one-page statement of principles. The law requires a documented program that specifies the principles, processes, and personnel your organization uses to identify, document, and mitigate risks of algorithmic discrimination. Think of it as a living governance framework, not a compliance artifact you file and forget.
An initial impact assessment. You must complete this within 90 days of the effective date, meaning no later than September 28, 2026. The assessment must cover the purpose and intended use of the AI system, the data it processes, the outputs it generates, the metrics used to evaluate performance and fairness, and the measures in place to mitigate discrimination risk.
Annual reviews. After the initial assessment, you are required to repeat impact assessments at least annually and within 90 days of any substantial modification to the system.
Consumer notification. Before a high-risk AI system makes or substantially contributes to a consequential decision about a consumer, you must notify them. This is a pre-decision disclosure requirement, not a post-hoc explanation.
Where Most Organizations Are Stuck
The most common failure point is not awareness. Most compliance teams know this law is coming. The breakdown happens at the inventory stage: organizations simply do not have a reliable picture of which AI systems they use, what decisions those systems influence, and whether those decisions qualify as consequential under the statute.
Without a complete AI inventory, you cannot scope your impact assessments. Without scoped assessments, you cannot build a credible risk management program. The whole compliance chain breaks at the first link.
This is especially acute in financial services and insurance, where AI is embedded across fraud detection, credit scoring, claims processing, and customer service. Many of these tools were adopted as vendor solutions without the documentation now required by law.
The Affirmative Defense Worth Building Toward
The law does include an important compliance incentive. Developers and deployers have an affirmative defense if they discover and cure violations within a reasonable timeframe, and if they comply with recognized risk management frameworks such as the NIST AI Risk Management Framework or ISO 42001.
This is significant. It means that organizations with structured, framework-aligned governance programs are not just better prepared operationally. They have a stronger legal position if something goes wrong.
The challenge is that aligning to these frameworks requires more than checking boxes. It requires documented evidence of ongoing risk assessment, transparency, and third-party accountability.
How to Prioritize the Next Eight Weeks
If you are starting now, focus on three things.
First, build your AI inventory. Catalog every AI system that touches decisions about Colorado consumers. Include vendor-provided tools, internally developed models, and any automated decision support systems.
Second, scope your impact assessments. For each system in your inventory, determine whether it qualifies as high-risk under the statute. Prioritize your initial assessments accordingly.
Third, benchmark against a recognized framework. The NIST AI RMF and ISO 42001 both provide structured approaches to AI risk management that satisfy the law's expectations and build toward the affirmative defense.
If you need an independent baseline, AI Clear's public registry provides transparency ratings for AI systems across 49 criteria aligned to NIST AI RMF, ISO 42001, and the MIT AI Risk Repository. Checking how your vendors score can accelerate your third-party due diligence and give you documented evidence of the diligence you performed.
The Bottom Line
The Colorado AI Act is the most operationally demanding state AI law to take effect in 2026, and it will not be the last. Organizations that build compliant governance programs now are not just meeting a single state deadline. They are building infrastructure that will serve them as additional state laws, federal guidance, and procurement requirements follow.
The clock is at eight weeks. Start with your inventory.
AI Clear is an independent AI transparency rating platform. Explore the public registry at aiclear.org or request a rating for your organization's AI systems.