
Colorado just rewrote its AI Act. Your vendor due diligence is still not ready.

AI Clear Team · 7 min read

Yesterday, the Colorado legislature sent SB 26-189 to Governor Polis, completing a frantic repeal-and-replace of the state's original AI Act (SB 24-205). The rewrite shifts some technical documentation deadlines to January 1, 2027, but do not mistake a timeline adjustment for a reprieve. The mandate to exercise "reasonable care" to avoid algorithmic discrimination and the transparency requirements for insurers are still hitting the 2026 calendar. For organizations deploying AI in lending, underwriting, hiring, or any consequential decision-making, the transition to active regulatory supervision begins in 47 days.

And Colorado is not operating in isolation. The NCUA updated its AI resource hub in January 2026 with clear expectations that examiners will benchmark credit union AI governance against NIST AI RMF and COSO frameworks. The U.S. Treasury released its Financial Services AI Risk Management Framework (FS AI RMF) in February, formalizing evidence-based vendor AI assessment as a baseline expectation for financial institutions. NAIC guidance now classifies insurance underwriting AI as high-risk, with Colorado expanding those requirements to auto and health insurers.

The message from every direction is the same: if you buy AI, you own the risk.

What SB 26-189 actually changes

The new bill preserves the core structure of the original act. High-risk AI systems used in consequential decisions still require documented risk management policies, impact assessments, and consumer notification. But three changes matter for compliance planning.

First, the split timeline. Notice and anti-discrimination provisions remain on the 2026 calendar. Full technical documentation and developer disclosure requirements shift to January 1, 2027. This gives deployers breathing room on paperwork but not on the underlying obligation to avoid algorithmic discrimination.

Second, SB 26-189 introduces a Meaningful Human Review requirement. A trained individual must have the authority to override AI system outputs in consequential decisions. This is not just a policy change. It is an operational one that requires immediate staffing and training updates. Organizations need to identify who holds override authority, document the review process, and ensure those individuals actually understand the systems they are overseeing.
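One practical way to make that documentation habit concrete is to log every human review as a structured record. The sketch below is illustrative only: the field names are our own, not language from the statute, and a real implementation would live in your case-management or decisioning system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class HumanReviewRecord:
    """One documented human review of an AI-assisted consequential decision.

    Field names are illustrative assumptions, not statutory terms.
    """
    decision_id: str
    system_name: str
    ai_recommendation: str
    reviewer: str                      # trained individual holding override authority
    reviewer_trained_on_system: bool   # evidence the reviewer understands this system
    overridden: bool                   # did the reviewer exercise the override?
    rationale: str                     # why the output was accepted or overridden
    reviewed_at: datetime

# Example: a reviewer overrides an AI denial after verifying documents.
record = HumanReviewRecord(
    decision_id="loan-2026-00421",
    system_name="credit-scoring-v2",
    ai_recommendation="deny",
    reviewer="j.rivera",
    reviewer_trained_on_system=True,
    overridden=True,
    rationale="Income verification documents resolve the flagged discrepancy.",
    reviewed_at=datetime.now(timezone.utc),
)
```

A frozen dataclass is a deliberate choice here: review records should be append-only, so an immutable structure makes accidental after-the-fact edits harder.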

Third, the Attorney General's exclusive enforcement authority and 60-day cure period remain intact. Violations are still treated as unfair trade practices.

Why traditional vendor questionnaires fall short

Most procurement and compliance teams still rely on SOC 2 reports and generalized risk questionnaires when evaluating technology vendors. These instruments were designed for an era when vendor risk meant uptime guarantees and data encryption standards. They were never built to assess algorithmic discrimination, model drift, training data provenance, or explainability gaps.

Under SB 26-189, deployers must still conduct impact assessments and repeat them annually. Those assessments must document how each AI system manages risks of algorithmic discrimination. A checkbox questionnaire cannot produce that documentation.

The Treasury's FS AI RMF goes further, calling for independent testing, bias audits, hallucination measurement, and security testing. Not questionnaire responses. Evidence.

What a modern AI vendor due diligence process looks like

Organizations that are ahead of this curve share a few common practices.

First, they maintain a complete AI inventory. Every AI system, whether built internally or purchased from a vendor, is cataloged with its purpose, data inputs, decision outputs, and risk classification. This is the foundation that both SB 26-189 and NIST AI RMF require.

Second, they tier their vendors by risk. A chatbot that answers FAQs about branch hours is fundamentally different from a model that influences credit decisions or insurance pricing. Critical vendors whose AI affects consequential decisions require the most intensive diligence, including bias testing evidence, model documentation, and incident response protocols.

Third, they demand transparency from vendors. This means requesting model cards, documentation of training data and known limitations, and evidence of third-party audits or ratings. Vendors that cannot produce this documentation represent a quantifiable compliance gap.

Fourth, they build continuous monitoring into the relationship. AI systems change. A vendor that passes an initial assessment can deploy model updates that introduce new risks. Annual point-in-time reviews are the regulatory floor, not the ceiling.
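The first two practices, a complete inventory and risk tiering, can be sketched as a simple data structure. Everything below is a hypothetical illustration: the tier names, evidence labels, and fields are our assumptions, not requirements drawn from SB 26-189 or NIST AI RMF.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    CRITICAL = "critical"   # influences consequential decisions (credit, pricing, hiring)
    MODERATE = "moderate"   # customer-facing but not decision-driving
    LOW = "low"             # internal convenience tooling

# Evidence a deployer might request from vendors at each tier.
# Illustrative labels only, not a regulatory checklist.
REQUIRED_EVIDENCE = {
    RiskTier.CRITICAL: ["bias_testing_results", "model_card",
                        "incident_response_protocol", "third_party_audit"],
    RiskTier.MODERATE: ["model_card", "data_handling_policy"],
    RiskTier.LOW: ["vendor_security_attestation"],
}

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    data_inputs: list[str]
    decision_outputs: list[str]
    affects_consequential_decisions: bool
    customer_facing: bool = False
    evidence_on_file: set[str] = field(default_factory=set)

    @property
    def risk_tier(self) -> RiskTier:
        # A model touching consequential decisions is always top tier.
        if self.affects_consequential_decisions:
            return RiskTier.CRITICAL
        return RiskTier.MODERATE if self.customer_facing else RiskTier.LOW

    def evidence_gaps(self) -> list[str]:
        """Evidence still missing ahead of the annual review."""
        return [e for e in REQUIRED_EVIDENCE[self.risk_tier]
                if e not in self.evidence_on_file]

# Example: an underwriting model with only a model card on file.
underwriting_model = AISystem(
    name="auto-underwriting-v3",
    vendor="Acme Models Inc.",
    purpose="insurance underwriting recommendations",
    data_inputs=["application data", "credit attributes"],
    decision_outputs=["risk score", "coverage recommendation"],
    affects_consequential_decisions=True,
    evidence_on_file={"model_card"},
)
print(underwriting_model.risk_tier.value)    # critical
print(underwriting_model.evidence_gaps())
```

Even a spreadsheet version of this structure, one row per system with tier and evidence status, gives compliance teams the inventory foundation both SB 26-189 and NIST AI RMF presume.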

The cost of waiting

Grant Thornton's 2026 AI Impact Survey found that 44% of insurance executives say governance or compliance challenges have contributed to AI projects failing or underperforming. Only 24% are very confident they could pass an independent AI governance review within 90 days. For credit unions and community financial institutions with smaller compliance teams, the gap is likely wider.

The passage of SB 26-189 does not reduce urgency. It clarifies it. The Meaningful Human Review requirement alone demands operational changes that take months to implement properly: identifying qualified reviewers, building override workflows, training staff on the systems they are supervising, and documenting the entire process. Waiting until the January 2027 documentation deadlines approach means scrambling through a holiday quarter.

Where independent AI ratings fit

Part of the challenge is that compliance teams are being asked to evaluate AI systems they did not build and may not fully understand. This is where independent, standardized AI transparency ratings become a practical tool in the due diligence process.

AI Clear's public registry rates companies on AI disclosure quality using a rubric anchored to NIST AI RMF, ISO 42001, and the MIT AI Risk Repository. Ratings translate complex technical and governance assessments into letter grades (A+ through F) that procurement and compliance teams can use as one input in their vendor evaluation workflow. The registry is free to search at aiclear.org/registry.

For organizations building or updating their AI vendor due diligence frameworks under the new SB 26-189 requirements, a standardized external rating provides a consistent benchmark that complements internal assessments and reduces the burden on compliance teams that are already stretched thin.

Next steps

If your organization deploys AI in consequential decisions and you have not yet completed an AI vendor inventory, the time to start is now. Three concrete actions for this week:

  1. Review the AI Clear registry at aiclear.org/registry to see how your current vendors score on AI transparency and disclosure quality.
  2. Map your AI systems against SB 26-189's definition of high-risk deployer obligations, paying particular attention to the Meaningful Human Review requirement.
  3. Begin documenting your risk management policy using NIST AI RMF as your framework, which both the Colorado Act and NCUA examiners specifically reference as a recognized standard.

Colorado just rewrote its AI law. The organizations that treat that as a signal to start preparing, rather than a reason to wait, will be the ones that are ready when supervision begins.

See where your company stands

AI Clear scores companies on AI transparency. Search the registry or request your scorecard.