February 28, 2025

Virginia Legislature Passes High-Risk AI Regulation Bill

Virginia High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094)

At a Glance

  • Virginia's legislature has passed a comprehensive bill regulating high-risk AI systems, joining Colorado and other states addressing AI regulation.
  • The Act requires developers and deployers of high-risk AI systems to implement safeguards against algorithmic discrimination.
  • If signed, the Act will take effect July 1, 2026, with exclusive enforcement by the Virginia attorney general and civil penalties of up to $10,000 per willful violation.
  • While similar to Colorado's 2024 law, Virginia's approach includes more industry-friendly limiting language, narrowing the definition of high-risk systems and imposing a "principal basis" test for covered decisions.

On February 20, 2025, the Virginia legislature passed the Virginia High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094), which now awaits Governor Glenn Youngkin's signature. The bill represents Virginia's entry into the growing field of state-level AI regulation.

Scope and Key Definitions

The Act focuses on "high-risk" AI systems that make or substantially influence consequential decisions affecting Virginia residents. A high-risk AI system is defined as one "specifically intended to autonomously make, or be a substantial factor in making," consequential decisions regarding:

  • Parole, probation, pardons or release from incarceration
  • Education enrollment or opportunities
  • Employment
  • Financial or lending services
  • Health care services
  • Housing
  • Insurance
  • Marital status
  • Legal services

The Act specifically excludes systems that perform narrow procedural tasks, improve the result of previously completed human activities, or detect decision-making patterns, as well as technologies such as anti-fraud systems (those not using facial recognition), cybersecurity tools and AI-enabled video games.

Developer Obligations

Developers must exercise reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination by:

  • Providing deployers with documentation about a system’s intended uses, limitations and risks.
  • Disclosing evaluation methods for performance and algorithmic discrimination.
  • Making information available for deployers to complete impact assessments.
  • Implementing detection measures for generative AI systems creating synthetic content.

Developers that comply with these requirements benefit from a rebuttable presumption that they exercised reasonable care, reducing their legal exposure under the Act.

Deployer Obligations

Organizations deploying high-risk AI systems must:

  • Implement a risk management policy and program.
  • Complete impact assessments before deployment and after significant updates.
  • Provide consumers with clear disclosures about AI system use.
  • Offer explanation, correction and appeal rights for adverse decisions.
  • Maintain compliance documentation.

Comparison With Colorado's AI Act

Both Virginia and Colorado take a risk-based approach to AI regulation, with similar definitions of "high-risk" systems and "consequential decisions." Like Colorado, Virginia's act does not provide for a private right of action and vests enforcement authority solely with the state's attorney general.

However, Virginia's act includes important limiting language absent from Colorado's law:

  1. Virginia defines high-risk systems as those "specifically intended to autonomously" render consequential decisions.
  2. AI must provide the "principal basis" for a consequential decision to be subject to Virginia's act, while Colorado's law applies to AI that serves as "a basis" for consequential decisions.
  3. Virginia is developing separate legislation for AI systems used by public entities versus those in the private sector.

Enforcement and Exemptions

The Virginia attorney general has exclusive enforcement authority, with civil penalties up to $1,000 per violation or $10,000 for willful violations.

Important exemptions exist for certain:

  • Financial institutions under state or federal regulatory oversight.
  • Insurance companies regulated by the State Corporation Commission.
  • Health care covered entities and telehealth service providers.
  • Federal government contractors (with limitations).

State AI Regulation Landscape

Virginia joins a growing number of states addressing (or at least considering) AI regulation through various approaches:

  • Comprehensive frameworks (Colorado, Connecticut)
  • Targeted regulations (Illinois's employment focus, California's generative AI rules)
  • Sector-specific requirements or disclosure mandates

Recommendations

Organizations that develop or deploy AI systems affecting Virginia residents should:

  1. Inventory AI systems to identify potential "high-risk" applications.
  2. Update AI governance frameworks and risk management processes.
  3. Develop procedures for impact assessments and documentation.
  4. Implement consumer notice and appeal processes.
  5. Evaluate qualification for statutory exemptions.

Looking Ahead

If signed into law by Governor Youngkin, the Virginia bill would extend the trend toward state-level AI regulation, with states taking varied approaches based on their specific concerns. Organizations should monitor the pending legislation, prepare for potential compliance obligations, and keep abreast of developments in Virginia, Colorado and other states, as well as at the federal level.
