DTC AI TRANSPARENCY REPORT

How DTC brands disclose their AI practices

We research public brand pages, privacy policies, and the Meta Ads Library for each DTC brand and score it on a five-domain rubric. So far, 0 brands have been scored, for an average score of 0/100.

No brands scored yet. Check back soon.

How to read this report

What we measured

Five domains of public AI transparency: AI Disclosure, Data & Consent, Content Authenticity, Pricing & Personalization, and Accountability. Each domain is scored 0–20, for a total of 0–100.

What we did NOT measure

Internal AI practices, model performance, security posture, or ethical AI commitments that are not publicly documented. A low score means low public disclosure, not necessarily poor practice.

How to interpret a grade

  • A (80–100): Comprehensive public AI disclosure across most domains
  • B (60–79): Meaningful disclosure with some gaps
  • C (40–59): Partial disclosure — usually limited to privacy policy
  • D (20–39): Minimal disclosure — generic language only
  • F (0–19): No meaningful AI transparency found
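The rubric above can be sketched in a few lines of code. This is an illustrative sketch only, not the report's actual scoring implementation: the domain names and grade bands come from the report, while the function names and data shapes are assumptions for the example.

```python
# Illustrative sketch of the report's rubric: five domains, each scored
# 0-20, summed to a 0-100 total and mapped to an A-F grade band.
# Function names and the dict-based input are hypothetical.

DOMAINS = [
    "AI Disclosure",
    "Data & Consent",
    "Content Authenticity",
    "Pricing & Personalization",
    "Accountability",
]

def total_score(domain_scores: dict) -> int:
    """Sum the five 0-20 domain scores into a 0-100 total.

    Missing domains count as 0, mirroring 'no disclosure found'.
    """
    return sum(domain_scores.get(d, 0) for d in DOMAINS)

def grade(total: int) -> str:
    """Map a 0-100 total to the report's grade bands."""
    if total >= 80:
        return "A"
    if total >= 60:
        return "B"
    if total >= 40:
        return "C"
    if total >= 20:
        return "D"
    return "F"
```

For example, a brand disclosing only in its privacy policy might earn 15 in Data & Consent and 10 in AI Disclosure, for a total of 25 and a grade of D, consistent with the "minimal disclosure" band above.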

Why disclosure matters

The EU AI Act, Colorado AI Act, and emerging US state legislation increasingly require public disclosure of AI use. Procurement teams and investors use AI transparency as a vendor risk signal. Public disclosure is the floor, not the ceiling.

Limitations

This report should not be interpreted as: a complete evaluation of AI practices, a guarantee of safety, or an endorsement. We score only what is publicly disclosed. Brands with low scores may have robust internal AI governance that is not publicly visible.