September 27, 2022

Artificial Intelligence Briefing: FTC Holds Forum on Commercial Surveillance and Data Security

Our latest briefing explores the recent FTC commercial surveillance and data security forum (including discussion of the widespread use of AI and algorithms in advertising), California’s inquiry into potentially discriminatory health care algorithms, and the recent California Department of Insurance workshop that could shape future rulemaking on the industry’s use of artificial intelligence, machine learning and algorithms.

Regulatory and Legislative Developments

  • FTC Forum on Commercial Surveillance & Data Security Touches on AI: The Federal Trade Commission held a five-hour public forum on September 8 regarding its Advance Notice of Proposed Rulemaking on commercial surveillance and lax data security practices that have the potential to harm consumers. (The FTC uses the term “commercial surveillance” to describe the business of collecting, analyzing and profiting from information about people.) During the forum, industry and consumer representatives spoke on a range of data-related topics, including the widespread use of AI and algorithms in the advertising context. If the FTC determines that harmful commercial surveillance practices are prevalent, it will issue a Notice of Proposed Rulemaking, which will include the text of a proposed rule. A transcript of the forum can be viewed here.
  • White House Announces Core Principles for Tech Platform Accountability: Also on September 8, the White House held a listening session on tech platforms and the need for greater accountability. In connection with the event, the administration announced six core principles for reform that include establishing “strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.”
  • National Association of Insurance Commissioners Summit Includes Two Collaboration Forum Sessions on AI: The NAIC Insurance Summit, held in Kansas City and virtually, featured two sessions on the Collaboration Forum’s Algorithmic Bias Project. The first session provided an overview of the European Union’s AI Act and recent developments at the Consumer Financial Protection Bureau and Federal Trade Commission. During the second session, representatives from the National Institute of Standards and Technology and the American National Standards Institute described their efforts with respect to artificial intelligence and machine learning. The next Collaboration Forum discussion will occur on October 14 during a joint call of the NAIC’s Innovation, Cybersecurity, and Technology (H) Committee and Consumer Liaison Committee.
  • California Department of Insurance Holds AI Workshop: On September 21, the California Department of Insurance (CDI) held a workshop examining potential bias and discrimination in the insurance industry’s use of artificial intelligence, machine learning and algorithms. Commissioner Ricardo Lara described the workshop as a starting place for future CDI rulemaking that will focus on privacy, transparency, representation and fairness. Jon Phenix of the CDI described federal and state insurance regulatory efforts to date; Dorothy Andrews of the NAIC presented on bias in modeling processes; and Cathy O’Neil (ORCAA) described how algorithms can unintentionally discriminate. The workshop also featured comments from industry and the public.
  • D.C. Council Hearing on Stop Discrimination by Algorithms Act: On September 22, the Council of the District of Columbia’s Committee on Government Operations and Facilities held a seven-hour public hearing on the Stop Discrimination by Algorithms Act of 2021. The bill would regulate algorithmic decision-making for insurance, credit, education, employment and housing; require annual algorithmic audits and reports; impose penalties for non-compliance; and create a private cause of action. There was significant opposition to the bill, with many witnesses asserting that it is overly broad, lacks flexibility, would burden companies and takes an unworkable one-size-fits-all approach. Those in support of the bill primarily applauded its focus on transparency, auditing, impact assessments and a private right of action. Written testimony will be accepted until October 6.
  • California Launches Inquiry Into Potentially Discriminatory Health Care Algorithms: California Attorney General Rob Bonta is investigating commercial health care algorithms used by hospitals and other health care providers to make decisions about patient care. Bonta sent letters to hospital CEOs across the state seeking information on which algorithms are in use and what steps are being taken to ensure that they do not have a disparate impact based on race or other protected characteristics.
  • EEOC and DOL Hold Roundtable on AI: On September 13, the Equal Employment Opportunity Commission (EEOC) and the Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) held a virtual roundtable to explore the civil rights implications of AI and other automated technology used in recruiting and hiring workers. Roundtable participants identified ways that hiring technologies can both promote equal employment opportunities and discriminate based on race, sex, disability and age. The EEOC and OFCCP emphasized their commitment to addressing barriers to hiring and recruiting employees from historically underrepresented communities. A recording of the roundtable can be viewed on YouTube.
  • RGA Corporate Policy Summit Holds Panel on AI: The Republican Governors Association (RGA) concluded its Corporate Policy Summit in Atlanta with a government and industry panel on artificial intelligence. Governors Pete Ricketts (NE) and Asa Hutchinson (AR) led a conversation with representatives from Elevance Health, Intuit and the U.S. Chamber of Commerce covering the meaning of AI, how it is used and recommendations for governors. When discussing regulation, panelists expressed a collective view that existing laws applicable to consumer data and privacy should be leveraged before the use of AI is separately regulated, and warned against overregulation. Panelists also noted, however, that industry needs to take appropriate steps to protect consumers and ensure that AI use is explainable.
  • New York City Proposes Rules Related to the Use of Automated Employment Decision Tools: The New York City Department of Consumer and Worker Protection issued proposed rules to implement new legislation governing automated employment decision tools. Among other things, the rules would clarify the requirements for the bias audits required by the legislation. A public hearing on the proposed rules is scheduled for October 24.

Key Upcoming Events

  • NIST to Hold Workshop on Second Draft of AI Risk Management Framework: The National Institute of Standards and Technology will hold a virtual workshop on October 18-19 to discuss the revised draft of its AI risk management framework. Check out our recent blog on the framework here.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
