Latest
June 08, 2023
Artificial Intelligence: EEOC Addresses Employer Liability When Using AI in Selection Procedures
The EEOC releases a technical assistance document exploring employers’ Title VII liability when incorporating AI tools and automated systems in employment selection procedures, and a new Texas district court rule prevents attorneys’ unchecked use of AI in preparing legal documents — we’re exploring these developments and more in our latest briefing.
Regulatory and Legislative Developments
- EEOC releases technical assistance. The Equal Employment Opportunity Commission recently released a technical assistance document titled Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. The document discusses employers’ potential liability under Title VII when they use AI tools and automated systems in employment selection procedures. Employers can be found liable for using such tools if the tools have a disparate impact on employees or applicants based on protected characteristics, even if the tools are designed or administered by another entity, such as a software vendor. Under the four-fifths rule, a selection procedure could be found to have a disparate impact if the selection rate of a protected group is less than 80% of the rate of the non-protected group. Employers can use the rule to draw preliminary inferences and prompt further assessment of a tool, but compliance with the rule is not necessarily sufficient to show that a tool is lawful under Title VII.
- Court implements rule regarding AI. District Judge Brantley Starr of the United States District Court for the Northern District of Texas has implemented a new rule requiring attorneys to certify either that they did not use AI in preparing their legal documents or, if they did, that a human verified the output. Judge Starr cautioned that AI platforms are prone to inaccuracies, biases and “hallucinations,” inventing fictional content, including quotes and citations. The rule notes that unlike human attorneys, AI systems have no commitment to a client, the rule of law or truth, functioning purely on preset programming. The rule underscores the legal system’s growing wariness of the unchecked use of AI.
- Colorado Division of Insurance releases revised governance regulation. On May 26, 2023, the Colorado Division of Insurance released a revised draft regulation implementing SB21-169 (codified as Colo. Rev. Stat. § 10-3-1104.9) with respect to life insurers only. The draft regulation sets forth governance and risk management framework requirements for life insurers that use external consumer data and information sources (ECDIS) and algorithms and predictive models that employ ECDIS. It adopts a risk-based approach and is less onerous than the initial draft but would still entail a significant compliance lift for most insurers.
- California Department of Insurance issues AI survey. On May 25, 2023, the California Department of Insurance issued a voluntary survey regarding the use of automated decision tools to selected insurance companies and groups. The survey covers insurers' use of such tools (which include big data, artificial intelligence and machine learning models) in connection with rating, underwriting, claims, fraud detection and marketing, and seeks information regarding governance, risk management and other controls. Survey responses are due by July 24 and may inform the Department's development of regulatory guidance or requirements.
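The four-fifths comparison described in the EEOC item above amounts to a simple ratio check on selection rates. As a minimal sketch (the function names and applicant counts below are illustrative assumptions, not taken from the EEOC document):

```python
# Hypothetical illustration of the four-fifths (80%) rule described above.
# The counts used here are made-up example data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

def four_fifths_flag(protected_rate: float, comparison_rate: float) -> bool:
    """Return True if the protected group's selection rate is below 80%
    of the comparison group's rate, suggesting a possible disparate
    impact that warrants further assessment."""
    return protected_rate < 0.8 * comparison_rate

# Example: 30 of 100 protected-group applicants selected (30%),
# versus 50 of 100 in the comparison group (50%).
protected = selection_rate(30, 100)   # 0.30
comparison = selection_rate(50, 100)  # 0.50

# 0.30 is less than 0.80 * 0.50 = 0.40, so the tool is flagged.
print(four_fifths_flag(protected, comparison))  # True
```

As the EEOC document cautions, passing this check (returning False) does not by itself establish that a selection tool is lawful under Title VII; the rule only supports preliminary inferences.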