June 08, 2022

Artificial Intelligence Briefing: CFPB Weighs in on Algorithmic Transparency

The federal government continues to take significant interest in organizations using artificial intelligence and algorithmic decision-making systems, as evidenced by three recent developments out of Washington, D.C., and the District's own insurance regulator has also announced an examination of potential algorithmic bias in auto insurance.

Government Activity & Regulatory Developments

  • Consumer Financial Protection Bureau (CFPB) issues policy statement on credit decisions based on complex algorithms. On May 26, the CFPB issued Circular 2022-03, which addresses an important question about algorithmic decision-making: “When creditors make credit decisions based on complex algorithms that prevent creditors from accurately identifying the specific reasons for denying credit or taking other adverse actions, do these creditors need to comply with the Equal Credit Opportunity Act’s requirement to provide a statement of specific reasons to applicants against whom adverse action is taken?” The Circular answers yes: compliance with the Equal Credit Opportunity Act (ECOA) and Regulation B is required even if complex algorithms (including AI and machine learning) make it difficult to accurately identify the specific reasons for taking the adverse action. Further, the Circular makes clear that those laws “do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions.”
  • White House executive order calls for study of predictive algorithms used by law enforcement agencies. On May 25, President Biden signed an Executive Order on Advancing Effective, Accountable Policing and Criminal Justice Practices to Enhance Public Trust and Public Safety. Among other things, the order calls for the National Academy of Sciences, through its National Research Council, to contract for a “study of facial recognition technology, other technologies using biometric information, and predictive algorithms, with a particular focus on the use of such technologies and algorithms by law enforcement, that includes an assessment of how such technologies and algorithms are used, and any privacy, civil rights, civil liberties, accuracy, or disparate impact concerns raised by those technologies and algorithms or their manner of use.” In addition, the executive order directs the Attorney General, Secretary of Homeland Security and Director of the Office of Science and Technology Policy to ensure that the use of such technologies by law enforcement agencies “does not have a disparate impact on the basis of race, ethnicity, national origin, religion, sex (including sexual orientation and gender identity), or disability.”
  • Food and Drug Administration (FDA) issues letter on AI-driven radiological software. The FDA recently issued a letter to health care professionals clarifying the intended use and limitations of AI-driven radiological software used for detecting acute ischemic strokes. According to the FDA, while the AI software can improve workflow by prioritizing suspected stroke cases, it is not intended to be a replacement for hands-on clinical care or a physician’s professional judgment. The FDA’s letter reminded health care professionals that the AI software (a) only flags radiological exams with suspected findings and should never be used as a replacement for informed interpretation by an imaging physician; (b) does not provide definitive diagnostic information or suggest that lower-priority cases be ruled out and deleted from a physician’s reading queue; and (c) cannot definitively rule out the presence of an acute ischemic stroke. The FDA is continuing to analyze real-world data to track the performance of this software.
  • D.C. insurance regulator announces plan to examine potential bias in auto insurance. The District of Columbia Department of Insurance, Securities and Banking announced on June 7 that it is partnering with O’Neil Risk Consulting and Algorithmic Auditing (ORCAA) to “identify whether District residents may be experiencing unintentional bias in the underwriting and rating criteria used by automobile insurers.” (ORCAA also is advising the Colorado Division of Insurance in connection with insurers’ use of external consumer data, algorithms and predictive models.) The D.C. process will begin with a virtual public hearing on June 29 at 3 p.m. EDT.

What We’re Watching

  • Putative class action targets credit-based scores used to screen tenants. On May 25, a putative class action was filed in federal district court in Massachusetts against SafeRent Solutions, LLC, which offers tenant screening services to landlords, real estate agents, property managers and others. The lawsuit alleges that SafeRent’s credit-based scoring algorithm violates the Fair Housing Act and state law by discriminating against Black and Hispanic rental applicants who use federally funded housing vouchers. A property manager that based its rental decisions on SafeRent’s scores was also named as a defendant.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
