May 15, 2024

Colorado Legislature Passes New AI Regulation — What You Need to Know Right Now

At a Glance

  • If signed into law, the bill will have Colorado take a risk-based approach to the regulation of AI. Developers and businesses using “high-risk” AI systems must take reasonable care to prevent algorithmic discrimination — defined as any unlawful differential treatment or impact that disfavors individuals or groups based on protected classifications.
  • If developers of high-risk AI systems are in compliance with the obligations enumerated in the bill, they are afforded a rebuttable presumption that they used reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. 
  • Businesses deploying high-risk AI systems must also use reasonable care to prevent algorithmic discrimination. Generally, deployers are required to implement a risk management policy and program to govern the deployment of the high-risk AI system. 
  • This is likely only the beginning of a wave of AI regulation at both state and federal levels. Both developers and deployers of AI systems should implement AI governance programs and risk management processes to manage, classify and document mitigation of AI risks. 

Colorado is close to becoming the first state to enact broad legislation to require developers and users of artificial intelligence (AI) systems to take steps to limit algorithmic discrimination. On May 8, 2024, the Colorado legislature passed Senate Bill 24-205, which is now with Colorado Gov. Jared Polis. If signed, the bill will come into effect on February 1, 2026.

Scope of Artificial Intelligence Systems

Like the EU AI Act, SB24-205 takes a risk-based approach to the regulation of artificial intelligence. Developers and businesses using “high-risk” AI systems must take reasonable care to prevent algorithmic discrimination — defined as any unlawful differential treatment or impact that disfavors individuals or groups based on protected classifications (e.g., age, race, religion, sex or disability). A “high-risk AI system” is an AI system used to make consequential decisions concerning the provision or denial to Colorado residents of, or the cost or terms of: (A) educational enrollment or an education opportunity; (B) employment or an employment opportunity; (C) financial or lending services; (D) essential government services; (E) health care services; (F) housing; (G) insurance; or (H) legal services.

The bill states that “high-risk artificial intelligence systems” do not include those systems that are intended simply to perform narrow procedural tasks or to detect decision-making patterns or deviations from prior decision-making patterns. Other examples of technologies that the bill identifies as outside the scope of high-risk AI systems include: cybersecurity software (anti-malware, anti-virus and firewall software), spreadsheets, databases, calculators and video games. For generative AI applications, the bill states that artificial intelligence platforms that communicate in natural language with consumers and are used for the purpose of providing information or answering questions, and that are subject to an acceptable-use policy that prohibits generating content that is discriminatory or harmful, would not be deemed a “high-risk artificial intelligence system.”
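
For teams building internal compliance tooling, this scope test reduces to two questions: does the system make a consequential decision in one of the enumerated areas, and does it fall within an exclusion? The following is a minimal, illustrative Python sketch of that first-pass check; the category and exclusion names are our own shorthand for the bill's terms, and any real scoping decision should be reviewed by counsel.

```python
# Illustrative sketch only: a first-pass scope check under SB24-205, using the
# consequential-decision categories and exclusions described above. The names
# below are our own shorthand, not statutory terms.

CONSEQUENTIAL_DECISION_AREAS = {
    "education",            # (A) educational enrollment or opportunity
    "employment",           # (B) employment or an employment opportunity
    "financial_services",   # (C) financial or lending services
    "government_services",  # (D) essential government services
    "health_care",          # (E) health care services
    "housing",              # (F) housing
    "insurance",            # (G) insurance
    "legal_services",       # (H) legal services
}

# Examples of technologies the bill identifies as outside the scope of
# high-risk AI systems.
EXCLUDED_TECHNOLOGIES = {
    "anti_malware", "anti_virus", "firewall",
    "spreadsheet", "database", "calculator", "video_game",
}

def is_high_risk(decision_area: str | None, technology: str,
                 narrow_procedural_task: bool) -> bool:
    """First-pass scope check; real scoping decisions require counsel review."""
    if technology in EXCLUDED_TECHNOLOGIES or narrow_procedural_task:
        return False
    return decision_area in CONSEQUENTIAL_DECISION_AREAS

# Example: a resume-screening tool makes employment decisions -> high-risk.
assert is_high_risk("employment", "ml_model", narrow_procedural_task=False)
```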

Developer Obligations

A person who creates a high-risk AI system or who “intentionally and substantially modifies” an AI system in a way that results in any new reasonably foreseeable risk of algorithmic discrimination is considered a “developer” of a high-risk AI system. A business that deploys a high-risk AI system is considered a “deployer” rather than a “developer,” unless the business modifies the AI system in a manner outside of the intended uses and documentation provided by the upstream developer.

If they are in compliance with the obligations enumerated in the bill, developers are afforded a rebuttable presumption that they used reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. These obligations include maintaining documentation describing the purpose, intended uses and benefits of the AI system, the types of training data used, and the known or reasonably foreseeable limitations of the system. Developers must also evaluate the performance of the AI system, implement written data governance measures to ensure the suitability of data sources, and mitigate any identified algorithmic discrimination risks.

A developer must also make certain information available to the public on its website, including the types of high-risk AI systems it makes and how it manages algorithmic discrimination risks. If the developer identifies a risk of algorithmic discrimination after an AI system has been deployed, the developer must notify the Colorado attorney general as well as all known users of the system. The attorney general may also request the full documentation provided to deployers, for evaluation.

Deployer Obligations

Businesses using high-risk AI systems (“deployers”) must also use reasonable care to prevent algorithmic discrimination. Generally, deployers are required to implement a risk management policy and program to govern the deployment of the high-risk AI system. Deployers should consider the National Institute of Standards and Technology (NIST) AI Risk Management Framework as well as the specific characteristics of the deployed high-risk AI system when drafting their policies.

Deployers must also conduct an annual impact assessment that details the purpose and intended use of the AI system; the risks of algorithmic discrimination and steps taken to mitigate them; a description of the system's inputs and outputs; the system's performance characteristics; transparency measures; and post-deployment monitoring. These impact assessments must be retained for three years. If a reasonably similar impact assessment is already required under another applicable law or regulation, the deployer may rely on that assessment instead.
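
As a practical matter, a deployer might capture these assessment elements in a structured record so that retention and review dates can be tracked. The sketch below is a hypothetical illustration; the field names are our own, and the bill does not prescribe any particular format.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record structure for an annual impact assessment, mirroring the
# elements described above. Field names are illustrative; the bill does not
# prescribe a format.

@dataclass
class ImpactAssessment:
    system_name: str
    assessment_date: date
    purpose_and_intended_use: str
    discrimination_risks: list[str]
    mitigation_steps: list[str]
    inputs_and_outputs: str
    performance_characteristics: str
    transparency_measures: str
    post_deployment_monitoring: str

    def retain_until(self) -> date:
        # The bill requires impact assessments to be retained for three years.
        return self.assessment_date + timedelta(days=3 * 365)
```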

Deployers are also subject to certain transparency requirements. In particular, deployers must:

  • Inform individuals about any high-risk AI systems in use, including the purpose, nature of the consequential decision, and a plain language description of each system
  • Inform individuals of their right to opt out of the processing of personal data for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects
  • Inform individuals if an artificial intelligence system is a substantial factor in making an adverse decision, and provide:
    • Right to Explanation: A statement explaining the principal reason for the decision, data used in the decision and the data source
    • Right to Correct: An opportunity to correct any inaccurate personal data used by the high-risk AI system in the decision
    • Right to Appeal: An opportunity to appeal that decision for human review, if technically feasible (subject to certain exceptions)
  • Publish a statement on the deployer’s website regarding the types of high-risk artificial intelligence systems that are currently deployed; how the deployer manages risks of algorithmic discrimination; and the nature, source, and extent of information collected and used
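
To make the adverse-decision disclosures concrete, the sketch below shows one hypothetical way a deployer could structure the notice elements listed above (explanation, correction and appeal). All field names are illustrative rather than statutory.

```python
from dataclasses import dataclass

# Hypothetical shape of an adverse-decision notice covering the explanation,
# correction and appeal elements listed above. All field names are our own.

@dataclass
class AdverseDecisionNotice:
    decision: str                 # the consequential decision that was made
    principal_reason: str         # right to explanation: why the decision was made
    data_used: list[str]          # categories of personal data the system relied on
    data_sources: list[str]       # where that data came from
    correction_instructions: str  # right to correct inaccurate personal data
    appeal_instructions: str      # right to appeal for human review, where feasible
```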

The Colorado attorney general may request a copy of the deployer’s risk management policies, impact assessments and other applicable records.

Disclosure Requirement for Consumer Interaction With an Artificial Intelligence System

Although the bill primarily targets the risks associated with “high-risk” AI systems, there is a more broadly applicable transparency obligation for all AI systems that are intended to interact with Colorado residents. If a Colorado resident interacts with an AI system, the deployer must disclose to the individual that he or she is interacting with an AI system (unless it would be obvious to a reasonable person). 

Exceptions

With certain exceptions, developers and deployers of AI systems that are subject to review and approval by a federal agency (e.g., FDA, FAA or HHS) are exempted from the bill’s requirements. In addition, certain insurers and financial institutions are carved out of the scope of the law. Finally, a small-business exemption exists for businesses that employ fewer than 50 employees and meet certain other criteria.

Enforcement

The Colorado attorney general is vested with exclusive enforcement authority, and a violation constitutes an unfair trade practice. The bill does not provide a private right of action. The attorney general is also authorized to promulgate rules to implement many parts of the law.

Recommendations

While the position of Colorado Gov. Polis on SB24-205 remains unclear as of this writing, this is likely only the beginning of a wave of AI regulation at both the U.S. state and federal levels. Both developers and deployers of AI systems should implement AI governance programs and risk management processes to manage, classify and document mitigation of AI risks. In particular, businesses using AI should begin to inventory their AI-related use cases and classify them based on risk. These businesses should also begin documenting the purpose of each AI system in use, the risks identified, and the measures adopted to provide transparency and mitigate those risks.
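
As a hypothetical starting point for that inventory exercise, the sketch below shows one way to record each AI use case with a risk classification and the documentation the bill contemplates; the tiers and fields are our own and should be adapted to each organization's governance program.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical starting point for an AI use-case inventory. Risk tiers and
# fields are our own and should be adapted to the organization's program.

class RiskTier(Enum):
    HIGH = "high"        # makes consequential decisions (see the bill's categories)
    LIMITED = "limited"  # consumer-facing but not consequential
    MINIMAL = "minimal"  # internal or narrow procedural use

@dataclass
class AIUseCase:
    name: str
    purpose: str
    risk_tier: RiskTier
    identified_risks: list[str]
    mitigations: list[str]
    transparency_measures: list[str]

inventory = [
    AIUseCase(
        name="resume screening",
        purpose="rank applicants for open roles",
        risk_tier=RiskTier.HIGH,  # employment decisions are high-risk under the bill
        identified_risks=["disparate impact on protected classes"],
        mitigations=["annual impact assessment", "bias testing"],
        transparency_measures=["candidate notice", "appeal process"],
    ),
]
```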
