February 07, 2025

FTC Sends a Reminder to Facial Recognition Tech Companies to Do What They Say and Say What They Do

Enforcement Against False Claims of Error-Free Algorithms

At a Glance

  • In December 2024, the Federal Trade Commission (FTC) initiated action against IntelliVision Technologies Corp., alleging the company made false, misleading or unsubstantiated claims that its AI-powered facial recognition software was free of gender and racial bias.
  • The FTC alleged in its complaint that IntelliVision did not have data to support its statements that its software has one of the highest accuracy rates on the market and performs with zero gender or racial bias.
  • Additionally, the FTC highlighted in its complaint that IntelliVision did not train the facial recognition software as it claimed.
  • In mid-January 2025, the FTC issued a final decision and order, under which IntelliVision is subject to a consent order for 20 years that governs the representations it can make about its technology.

On December 3, 2024, the FTC filed a complaint against IntelliVision Technologies Corp., alleging that while the company claimed that its algorithm had one of the highest accuracy rates in the facial recognition technology market and performed with zero gender or racial bias, IntelliVision could not support such claims. On January 8, 2025, the FTC issued a decision confirming the entry of a 20-year consent agreement pursuant to which IntelliVision is, among other things: (i) prohibited from making false, misleading or unsubstantiated statements about its technology; (ii) required to support all representations about its technology with “competent and reliable testing”; and (iii) required to maintain records of its testing and compliance with the FTC’s decision.

Background

Over the past 18 months, the FTC has sent clear warning signals to the tech industry, focusing on the use of facial recognition and its impact on gender and racial bias. In an earlier client alert, FTC’s New Policy on Biometric Information Creates New Federal and State Legal Risks, we highlighted FTC Chair Lina M. Khan’s focus on the 2023 policy statement covering Biometric Information and Section 5 of the Federal Trade Commission Act. The policy statement stressed the FTC’s “unwavering commitment to fairness,”1 alerting facial recognition companies that the agency would continue to enforce Section 5’s deceptive-acts prong against entities that make false or unsubstantiated claims about the efficacy of their biometric technology.

The 2023 policy statement warned facial recognition developers against exaggerating their products’ capabilities for two crucial reasons: harm to competitors and harm to consumers. The FTC made clear that it was concerned about the risk that vendors would inflate their software’s capabilities, to the detriment of honest technology vendors who do not oversell their products. Additionally, the FTC explained that users’ reliance on inflated representations would hurt consumers who are denied benefits, wrongly accused or otherwise adversely impacted by a product that does not perform as promised.2

FTC Complaint and Consent Order

According to the FTC, IntelliVision produces facial recognition software that is commonly integrated into home security systems to permit consumers to access their security panel using a scan of their face.

The FTC’s succinct, four-page complaint alleged that between 2018 and 2024, IntelliVision advertised its software’s ability to “detect faces of all ethnicities, without racial bias, and recognize them from a database of images.”3 IntelliVision also claimed that the software had “zero gender or racial bias through model training with millions of faces from datasets from around the world.”4 Additionally, IntelliVision claimed that its facial recognition technology had “one of the highest accuracy rates on the market.”5 According to the FTC, IntelliVision could not support these assertions.

First, the FTC used public data from the National Institute of Standards and Technology (NIST) to show that IntelliVision’s algorithm (which IntelliVision submitted to NIST for evaluation) was “not one of the top-performing algorithms” and that its error rates “were not among the top 100 best performing algorithms tested by NIST as of December 19, 2023.”6

Second, the FTC alleged that IntelliVision’s claim to have trained its facial recognition software on millions of faces was false because IntelliVision actually trained its technology on images of approximately 100,000 unique individuals and then used technology to create variants of those same images.

Third, the FTC alleged that IntelliVision lacked testing to support its assertions that its technology could not be fooled by spoofing.

The FTC’s consent order, issued on January 8, 2025, prohibits IntelliVision from (among other things) making any representations about the “effectiveness, accuracy, or lack of bias of such Facial Recognition Technology . . . unless [IntelliVision] possesses and relies upon competent and reliable testing.”7 The testing must be performed in “an objective manner by qualified persons and be generally accepted by experts in the profession to yield accurate and reliable results.”8 Additionally, IntelliVision must maintain records of all testing, including “the date and results of the tests and the method and methodology used; the source and number of images used; the source and number of different people in the images.”9

The FTC’s Expanding Definition of Facial Recognition Technology

In the 2023 Biometric Information Policy Statement, the FTC defined “facial recognition technology” as a subset of “biometric information” that “includes, but is not limited to, depictions, images, descriptions, or recordings of an individual’s facial features” as well as “data derived from such depictions, images, descriptions, or recordings, to the extent that it would be reasonably possible to identify the person from whose information the data had been derived.”10 Albeit broad, this definition appeared to comport with the generally accepted understanding that “facial recognition technology” involves identification. 

A few months later, in the 2024 Rite Aid decision, the FTC started to move away from the idea that facial recognition technology must involve identification of a specific individual. Instead, the FTC started to focus on whether the technology could aid in generating an inference about an individual from their face: The FTC defined “Facial Recognition or Analysis System” as a system that “analyzes or uses depictions or images, descriptions, recordings, copies, measurements, or geometry of or related to an individual’s face to generate an Output,” with “Output” defined as “a match, alert, prediction, analysis, assessment, determination, recommendation, identification, calculation, candidate list, or inference that is generated by a machine-based system” processing the data gathered from an individual’s face.11 In other words, the FTC’s view appeared to be that “identification” was just one of many uses that can make something “facial recognition technology.”

The FTC’s latest definition in the IntelliVision decision confirms this. “Identification” remains one component: the first clause describes facial recognition technology as “the automated or semi-automated process that can be used, singly or in combination with other data, to verify, authenticate, or ascertain a person’s identity based on the characteristics of their face, singly or in combination with other data.”12 But the second clause, preceded by “or,” states that facial recognition is also “the automated or semi-automated process by which characteristics of a person’s face, singly or in combination with other data, are analyzed for inferences about an individual’s sentiment, emotional state, state of mind, personality, character, and other qualities including but not limited to veracity, state of attentiveness, and mood.”13 This expanded definition makes clear that the FTC no longer views “recognition” as limited solely to an individual’s identity, but rather as extending to a whole host of inferences and conclusions that can be extrapolated from an individual’s face.

What This Means for Companies

Since the issuance of its biometric policy statement in May 2023, the FTC has now twice used Section 5 of the Federal Trade Commission Act to pursue claims against businesses that develop and use biometric technology in ways the Commission deems deceptive or unfair. In both of those complaints, the FTC referenced the potential for racial bias. While the FTC’s focus on testing biometric and facial recognition software to minimize the negative impact on members of racial groups may disappear with the new administration, the FTC will most likely still patrol the biometric technology market for companies that claim to have error-free facial recognition algorithms. Even if the FTC itself does not pursue enforcement actions, the 2023 policy statement and the FTC’s enforcement actions against Rite Aid and IntelliVision provide a roadmap for state AGs and private plaintiffs. In addition, the FTC’s expanded definition of “facial recognition technology” foreshadows agency interest in, and potential enforcement involving, face-scanning technology that would not fit within the traditional understanding of “facial recognition.”