March 03, 2022
Artificial Intelligence Briefing: Feds Tackle AI Issues
The federal government continues to weigh in on artificial intelligence, algorithmic fairness and related issues. In the latest artificial intelligence briefing, we analyze recent events in Washington, D.C., and track important developments in Colorado regarding its sweeping law restricting insurers’ use of automated decision-making.
Regulatory and Legislative Developments
- The Colorado Division of Insurance held its first stakeholder session on SB 21-169, the state’s sweeping unfair discrimination law. The February 17 session kicked off the stakeholder process required by the law, which restricts insurers’ use of external consumer data, algorithms and predictive models — and which may become a blueprint for similar laws in other states. The session, attended by 300 interested parties, provided an early glimpse into how the law will be implemented and underscored several major themes and takeaways that should be top of mind for insurers, including:
- Regulators and industry have a duty to ensure big data is used responsibly.
- If an insurer’s algorithm or predictive model uses both external consumer data and traditional underwriting data, the insurer may need to test all of these elements for possible bias. This point is a significant concern for insurers, and it raises a number of questions that will need to be addressed in upcoming sessions.
- A simple correlation to risk will not be sufficient justification if the underlying insurance practice also correlates to a protected class and negatively impacts that class. Colorado Insurance Commissioner Michael Conway said the regulations will describe the balancing test that insurers will need to perform; coming up with that test may be the most difficult and important part of the rulemaking process. (A simplified illustration of this kind of disparity testing appears after this list.)
- The Consumer Financial Protection Bureau (CFPB) outlined options designed to ensure that computer models used to help determine home valuations are accurate and fair. The CFPB is considering principles-based and prescriptive options to vet these systems — these may include a requirement that covered institutions establish control systems to ensure that their automated valuation models (AVMs) comply with applicable nondiscrimination laws. The CFPB noted that “algorithmic systems — including AVMs — can replicate historical patterns of discrimination or introduce new forms of discrimination because of the way a model is designed, implemented, and used.”
- The Food and Drug Administration (FDA) held a webinar on digital health technologies (DHTs) for remote data acquisition. Hosted on February 10 by the FDA’s Center for Drug Evaluation and Research’s Small Business and Industry Assistance (SBIA), the webinar discussed the FDA’s December 2021 draft guidance on the use of DHTs in clinical investigations. The FDA spent significant time answering questions from attendees and clarifying the agency’s position on the use of DHTs during clinical investigations. The FDA acknowledged the benefits of DHTs, which can record and collect data directly from clinical trial participants, providing a better picture of how patients feel and function in their daily lives. Among other things, the agency also recognized the need to ensure the integrity of data collected through DHTs and to confirm that DHTs are fit for the purpose of a clinical investigation. Comments on the draft guidance are due by March 22, 2022.
- The House Energy and Commerce Committee’s Subcommittee on Consumer Protection and Commerce held a March 1 hearing entitled “Holding Big Tech Accountable: Legislation to Protect Online Users.” The subcommittee discussed five bills, including the Algorithmic Accountability Act of 2022, which was reintroduced in February. This bill would direct the Federal Trade Commission (FTC) to require impact assessments for bias, fairness, effectiveness and other factors when using “automated decision systems,” including those derived from artificial intelligence or machine learning. Mutale Nkonde (CEO, AI for the People U.S.) testified at the hearing and expressed support for the Algorithmic Accountability Act, emphasizing the need for greater transparency into how machine learning systems make determinations and noting that these decisions have lasting impacts on people’s lives, particularly those in marginalized communities. While there is little to suggest this bill will become law in the near future, the hearing provides a useful window into how lawmakers are approaching issues related to algorithmic fairness and automated decision-making.
- The U.S. Chamber of Commerce’s Commission on Artificial Intelligence, Competitiveness, Inclusion and Innovation issued an RFI seeking feedback on fairness and ethical concerns related to AI and global competitiveness. Topics covered in the RFI include regulatory frameworks, transparency and oversight audits, explainability and “human-in-the-loop” as safety mechanisms, transparency requirements and oversight of government use of AI, and fostering innovation. The AI Commission intends to use the responses to inform the development of bipartisan recommendations. Comments are due by March 25, 2022.
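For readers curious what the bias testing discussed in the Colorado stakeholder session might look like in practice, below is a minimal, hypothetical sketch in Python of one common screening statistic, the adverse impact ratio. The data, column names, and the 0.8 benchmark (borrowed from the EEOC’s “four-fifths rule” in employment law) are illustrative assumptions only; SB 21-169 and the Division’s forthcoming regulations do not prescribe any particular test.

```python
import pandas as pd

# Hypothetical model outputs: one row per applicant, with the model's
# adverse-action decision and a protected-class indicator.
# Column names and data are illustrative assumptions only.
decisions = pd.DataFrame({
    "declined": [0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
    "protected_class": [0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
})

def adverse_impact_ratio(df: pd.DataFrame) -> float:
    """Ratio of favorable-outcome rates: protected group vs. everyone else.

    A ratio well below 1.0 suggests the practice disproportionately
    disadvantages the protected group and may call for the kind of
    justification and balancing analysis regulators described.
    """
    favorable = 1 - df["declined"]
    protected_rate = favorable[df["protected_class"] == 1].mean()
    reference_rate = favorable[df["protected_class"] == 0].mean()
    return protected_rate / reference_rate

ratio = adverse_impact_ratio(decisions)
print(f"Adverse impact ratio: {ratio:.2f}")

# The 0.8 threshold echoes the EEOC "four-fifths rule" from employment
# law; it is used here purely as an illustrative benchmark.
if ratio < 0.8:
    print("Potential disparate impact; further testing or justification needed.")
```

Actual compliance testing would be far more involved, controlling for legitimate risk factors and examining individual data elements as well as overall model output, but even this toy calculation shows why insurers are pressing for clarity on which elements must be tested and what level of disparity triggers the balancing analysis.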