The Colorado AI Act: America’s First Comprehensive AI Law


27 May 2024

This month, as the EU AI Act received its final vote and awaits official publication and entry into force, Colorado became the first US state to follow suit and enact comprehensive legislation regulating artificial intelligence, with the signing of the Colorado AI Act.

This landmark act aims to protect consumers from potential harms associated with AI systems, particularly those making consequential decisions. The new act mandates stringent requirements for developers and deployers of high-risk AI systems to prevent algorithmic discrimination and ensure transparency.


Scope of Application

Colorado’s AI Act applies to developers and deployers of high-risk AI systems. These terms are defined in the act in the following ways:

  • High-Risk Artificial Intelligence Systems means any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision.
    • Substantial factor means a factor generated by an AI system that is used to assist in making, and is capable of altering the outcome of, a consequential decision.
    • Consequential decision means any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) Education; (b) Employment; (c) Financial or lending services; (d) Essential government services; (e) Healthcare service; (f) Housing; (g) Insurance; or (h) Legal services.


The act excludes from the definition of a “high-risk AI system” systems that perform narrow tasks or detect decision-making patterns without replacing human judgment, as well as various technologies such as anti-fraud tools, anti-malware, calculators, and web hosting, unless they impact significant decisions. Interestingly, the act specifically excludes technologies that “communicate with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful”, which seems to cover popular generative AI chat tools (e.g., ChatGPT, Gemini).

  • Developer means any person doing business in Colorado that develops, or intentionally and substantially modifies, an AI system.
  • Deployer means any person doing business in Colorado that deploys (uses) a high-risk AI system.


Developer Duties

Under the Colorado AI Act, developers are required to “use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.”

“Algorithmic discrimination” is defined as any condition where the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group of individuals based on their actual or perceived protected class such as age, disability, genetic information, language proficiency, race, religion, sex, etc.

To protect themselves in potential enforcement actions brought by the Attorney General, developers can establish a rebuttable presumption of having met the “reasonable care” standard by complying with these requirements:

  • Disclosures to other developers and deployers: Developers must provide documentation to deployers or other developers, which includes:
    • A general statement describing foreseeable uses and known harmful or inappropriate uses.
    • High-level summaries of the data used to train the system.
    • Known or foreseeable limitations and risks of algorithmic discrimination.
    • The system’s purpose, intended benefits, and uses.
    • Information necessary for deployers to comply with their legal obligations.
    • Documentation on system evaluation for performance and mitigation of algorithmic discrimination, data governance measures, intended outputs, risk mitigation measures, and guidance on proper use, misuse, and monitoring.


  • Public disclosure: Developers must provide a public summary of the types of high-risk AI systems developed or modified and the management of risks associated with algorithmic discrimination. This summary must be updated as necessary and no later than 90 days after substantial modifications to any high-risk AI system.


  • Disclosure to Attorney General and deployers: Developers must disclose known or reasonably foreseeable risks of algorithmic discrimination to the Attorney General and all known deployers or other developers without unreasonable delay, but no later than 90 days after discovering the risk or receiving a credible report of algorithmic discrimination.


Deployer Duties

A similar duty of care is imposed on deployers of high-risk AI systems, who can likewise establish a rebuttable presumption of reasonable care by fulfilling the following requirements:

  • Risk management policy and program: Deployers must implement a risk management policy and program to govern the deployment of high-risk AI systems. This policy must include principles, processes, and personnel dedicated to identifying, documenting, and mitigating risks of algorithmic discrimination. The program must be regularly reviewed and updated based on recognized risk management frameworks such as the Artificial Intelligence Risk Management Framework released by NIST and ISO/IEC 42001.


  • Impact assessments: Deployers, or third parties contracted by deployers, must complete impact assessments for high-risk AI systems before deployment, annually, and within 90 days of any substantial modifications. These assessments should cover the system’s purpose, intended use, risks of algorithmic discrimination, data processed, performance metrics, transparency measures, and post-deployment monitoring and safeguards.


  • Consumer notification and rights: Before making consequential decisions using high-risk AI systems, deployers must notify consumers, provide a statement disclosing information about the high-risk AI system in use, including its purpose and the nature of the decision, and, if applicable, provide information regarding the right to opt out of profiling under the Colorado Privacy Act. If a decision is adverse, deployers must also disclose the reasons for the decision, an opportunity to correct personal data used for making the decision, and an option to appeal the decision.


  • Public disclosure: Deployers must publicly disclose on their website a statement summarizing the types of high-risk AI systems they deploy, how they manage risks of algorithmic discrimination, and details about the information collected and used. This information must be kept current and updated as necessary.


  • Disclosure to Attorney General: If a high-risk AI system causes algorithmic discrimination, deployers must notify the Attorney General without undue delay and no later than 90 days after discovery.


Exemptions: Deployers with fewer than 50 full-time employees that do not use their own data to train the high-risk AI system and meet certain transparency requirements are exempt from the risk management, impact assessment, and website disclosure requirements. They remain subject to the duty of care and must still provide the relevant consumer notifications and rights.


Additional Requirements for Developers and Deployers

  • Deployers and consumer-facing developers of high-risk AI systems intended to interact with consumers must ensure that each consumer is informed that they are interacting with an AI system unless it would be obvious to a reasonable person.


  • Both developers and deployers must provide all required documentation under the act to the Attorney General upon request.


Enforcement and Rulemaking

The new act will take effect on 1 February 2026. The act does not provide for a private right of action and the Attorney General has exclusive authority to enforce violations under it, with a penalty of up to $20,000 per violation.

In enforcement actions, developers, deployers, or other persons can assert an affirmative defense if they:

  • Discover and cure a violation through feedback, adversarial testing or red-teaming, or internal review; and
  • Comply with recognized AI risk management frameworks (such as NIST or ISO), or any framework designated by the Attorney General.


The Attorney General is also empowered to promulgate rules to implement and enforce the Colorado AI Act, including guidelines for documentation, disclosures, risk management policies, impact assessments, rebuttable presumptions, and affirmative defenses.


Companies developing, providing, distributing and using AI systems should evaluate their exposure to this new regulatory framework, as well as to additional similar regulatory frameworks that are expected to be enacted in additional US states.

Feel free to contact us if you have any questions regarding this new law and its practical implications.
