October 17, 2024

Artificial Intelligence Briefing: Responsible Innovation and Increasing Regulations

This month, the NYDFS issued guidance for financial services firms on cybersecurity risks tied to AI, focusing on threats such as AI-enabled attacks and third-party dependencies; California enacted a law regulating health insurers’ use of AI in utilization management, requiring that medical necessity determinations be made by a licensed physician or qualified health care professional rather than by AI alone; a federal judge halted two California laws aimed at curbing AI-powered disinformation, citing First Amendment concerns; life insurers that do business in Colorado using external consumer data must report on governance and compliance but have a temporary reprieve from reporting on unfair discrimination testing; and the FTC launched “Operation AI Comply,” targeting deceptive uses of AI in consumer-facing products, while the OMB issued guidance to federal agencies on responsibly acquiring AI technology. Read on for a deeper dive into these key updates.

Regulatory, Legislative and Litigation Developments

  • New York DFS Issues Guidance on Cybersecurity Risks Relating to AI. On October 16, 2024, the New York Department of Financial Services (NYDFS) released an industry letter providing guidance on cybersecurity risks associated with artificial intelligence and strategies for mitigating those risks. The guidance is directed at financial services companies subject to New York’s cybersecurity regulation (23 NYCRR Part 500) and is not intended to impose new requirements beyond those already in Part 500. It explains risks relating to AI-enabled social engineering, AI-enhanced cybersecurity attacks, exposure of nonpublic information collected and processed for AI-related purposes, and increased vulnerabilities arising from third-party, vendor and supply chain dependencies, and it lays out multiple strategies for addressing those risks.
  • California Enacts AI Bill on Health Insurer Utilization Management Tools. California Governor Gavin Newsom signed legislation (SB 1120) on September 28, 2024, regulating the use of certain artificial intelligence tools, algorithms and software by health insurers and managed care (Knox-Keene) plans in California. Specifically, the law sets forth criteria that these tools must meet to be deployed in utilization management and requires that an “artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider…” Governor Newsom also signed a number of other AI bills that were described in a September 29 press release.  
  • Federal Court Enjoins Enforcement of New California AI Legislation. On October 2, 2024, a federal judge entered a preliminary injunction prohibiting the enforcement of two new California laws, AB 2655 and AB 2839, on the basis that the laws likely violate the First Amendment. U.S. District Judge John A. Mendez (Eastern District of California) criticized the laws, stating that while the “risks posed by artificial intelligence and deepfakes are significant,” the laws as drafted would “act[] as a hammer instead of a scalpel” and “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” Governor Newsom had signed the bills into law to curtail the spread of “disinformation powered by generative AI,” particularly in the political arena, after an AI-generated parody of Kamala Harris was distributed to over 100 million individuals on X (formerly known as Twitter).
  • Colorado Division of Insurance Waives Testing Requirement for 2024. Under Colorado Insurance Regulation 10-1-1, life insurers that use external consumer data and information sources (ECDIS) in any insurance practice must adopt a governance and risk management framework and submit a report to the Colorado Division of Insurance by December 1 detailing their compliance. The report is supposed to include a description of the testing the insurer performed to detect unfair discrimination, but the division is waiving that requirement for this year’s report because it has yet to adopt a final ECDIS testing regulation. Insurers are still expected to comply with the other reporting requirements and will be expected to report on unfair discrimination testing starting in 2025.
  • FTC Announces Crackdown on Deceptive AI Claims and Schemes. The Federal Trade Commission (the Commission) announced several new actions in “Operation AI Comply,” targeting uses of and claims about AI that cause consumer harm. First, the Commission unanimously approved filing three lawsuits against companies that claimed they could use AI to create and operate profitable online storefronts. The Commission was also unanimous in filing a complaint against DoNotPay, which offers the “world’s first robot lawyer” that purportedly will “generate perfectly valid legal documents in no time.” But the Commission was divided on whether to file a complaint against Rytr, with two commissioners penning strong dissents. Rytr offers a subscription-based AI “writing assistant” marketed for 43 different use cases, one of which is “Testimonial & Review” generation, a use the majority alleged has “no or de minimis reasonable, legitimate use” but instead furnishes the “means and instrumentalities to deceive.” The dissenting commissioners decried what they saw as an unwarranted expansion of “means and instrumentalities” liability without a showing of either harm or scienter.
  • OMB Issues Guidance on Responsible AI Acquisition for Federal Agencies. On October 3, 2024, the Office of Management and Budget (OMB) released guidance (M-24-18) to help federal agencies responsibly acquire artificial intelligence. The guidance builds on previous requirements for AI use in government, focusing on managing risks and performance, promoting market competition, and fostering interagency collaboration in AI procurement. Key aspects include involving privacy officials early in the acquisition process, negotiating contracts to protect government data and intellectual property, and implementing measures to prevent vendor lock-in. The guidance aims to leverage the government’s significant purchasing power to drive responsible AI innovation while ensuring the technology is used to optimize services for the American people.
  • DOJ Updates Corporate Compliance Program Review to Include AI and Emerging Tech. On September 23, 2024, the Criminal Division of the U.S. Department of Justice (DOJ) published a revision to its Evaluation of Corporate Compliance Programs (ECCP) that considers a company’s management of AI and other emerging technologies. The ECCP analysis is a significant factor in DOJ’s investigation, prosecution and sentencing determinations for a corporate entity, including the amount of any potential monetary penalty. DOJ directs prosecutors to evaluate how a company manages the impact of AI, including whether the company is vulnerable to criminal schemes facilitated by AI and, if the company uses AI in its business, whether sufficient controls are in place to ensure proper usage. In a recent speech on the update, Principal Deputy Assistant Attorney General Nicole M. Argentieri explained that “prosecutors will consider the technology that a company and its employees use to conduct business, whether the company has conducted a risk assessment of the use of that technology, and whether the company has taken appropriate steps to mitigate any risk associated with the use of that technology.”
  • Second Algorithmic Price-Fixing Case Against Casino-Hotels Dismissed. On October 1, 2024, Judge Karen Williams of the U.S. District Court for the District of New Jersey dismissed with prejudice an antitrust class action alleging that casino-hotels in Atlantic City conspired to fix hotel room prices through their shared use of pricing algorithms. Recognizing the “unique antitrust theory [p]laintiffs have proposed,” the decision gives several reasons why the operative complaint failed to plausibly allege a horizontal agreement among the casino-hotel competitors. Significantly, Judge Williams held that the “[c]omplaint does not allege that the Casino-Hotels’ proprietary data are pooled or otherwise comingled into a common dataset against which the algorithm runs,” and that the “[c]ourt cannot infer a plausible price-fixing agreement between the Casino-Hotels from the mere fact that they all use the same pricing software.” The decision is consistent with the May 2024 dismissal of a nearly identical algorithmic price-fixing suit against casino-hotels on the Las Vegas Strip. The cases are Cornish-Adebiyi v. Caesars Entertainment Inc., Case No. 1:23-cv-02536, in the District of New Jersey and Gibson v. Cendyn Group, LLC, Case No. 2:23-cv-00140, in the District of Nevada.
  • Congressional Staff Association on AI. A group of congressional staffers has launched the first staff association on artificial intelligence (CSA.ai). CSA.ai is sponsored by Reps. Mark Green (R-TN), Ted Lieu (D-CA), Zach Nunn (R-IA) and Don Beyer (D-VA), and their staffers make up the group’s executive board. The group has not yet chosen its AI-specific topics of focus but is considering, among other things, examining the progress of President Biden’s Executive Order on AI and whether existing laws are sufficient to guide the use of AI across sectors. The group intends to host a monthly speaker series, expert panels, training sessions and more. Alexandra Seymour, CSA.ai’s president, acknowledged several existing bills that call for auditing and testing of AI models and would like to increase staffers’ technical understanding of that topic. Another leader of the group stated that the House Bipartisan Task Force on Artificial Intelligence is expected to release a substantive proposal in November.
  • OECD and FSB Release Findings on AI in Financial Services. The Organization for Economic Co-operation and Development (OECD) and the Financial Stability Board (FSB) recently released findings from their Roundtable on Artificial Intelligence in Finance, hosted earlier this year; the keynote was delivered by Nellie Liang, Under Secretary for Domestic Finance at the U.S. Treasury and Chair of the FSB Standing Committee on Assessment of Vulnerabilities. The findings include a familiar list of the pros and cons of using AI in the banking, insurance and asset management sectors but also note potential financial stability risks, including amplification of interconnectedness, opacity and complexity. They conclude with a call for policymakers to “promote the safe use of AI in financial services,” focusing on risk-based approaches to model risk management and on international cooperation to develop standards and share good practices, and a reminder that regulators must also assess their own regulatory capabilities in this area.