Artificial Intelligence Briefing: Trump Names AI and Crypto Czar, and Other Regulatory, Legislative and Litigation Developments
This month we cover developments involving the Federal Trade Commission, the Centers for Medicare & Medicaid Services, the Financial Stability Board, the California Privacy Protection Agency, the Colorado Division of Insurance and more in our latest briefing.
Regulatory, Legislative and Litigation Developments
Trump Names AI and Crypto Czar
David Sacks, a Silicon Valley venture capitalist and conservative donor, will serve as White House AI and crypto czar under President-elect Trump. Undoing President Biden’s AI Executive Order probably won’t be the first order of business on inauguration day, but look for the Trump administration to chart a new course on AI, one that prioritizes innovation and American competitiveness over regulation and consumer protection. If that happens, expect blue states to pick up the regulatory baton.
CFPB Proposes Rule to Stop Data Brokers From Selling Sensitive Personal Data to Scammers, Stalkers and Spies
The Consumer Financial Protection Bureau (CFPB) has proposed a rule to classify data brokers as consumer reporting agencies under the Fair Credit Reporting Act (FCRA), thereby subjecting them to stricter regulation. The measure aims to prevent the unauthorized sale of sensitive personal and financial information, such as Social Security numbers, phone numbers, income and credit scores, to malicious entities including scammers and foreign adversaries. Through the rule, the CFPB seeks to reduce risks that include the criminal exploitation of vulnerable consumers, stalking of law enforcement personnel and domestic violence victims, and surveillance or blackmail operations against individuals in possession of critical national security information. The CFPB noted that the rapid advance of artificial intelligence amplifies all of these risks by enabling re-identification of sensitive data previously assumed to have been adequately de-identified.
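To make the re-identification concern concrete, here is a minimal, purely hypothetical Python sketch of the classic linkage technique: joining a “de-identified” dataset to a public, named dataset on shared quasi-identifiers. All records, field names and the reidentify helper are invented for illustration; AI systems simply perform this kind of matching at far greater scale and with fuzzier signals.

```python
# A "de-identified" broker extract: names stripped, quasi-identifiers kept.
# All data below is hypothetical.
deidentified = [
    {"zip": "50301", "birth_year": 1984, "sex": "F", "credit_score": 612},
    {"zip": "73102", "birth_year": 1990, "sex": "M", "credit_score": 745},
]

# A separate, public dataset (think voter roll) that still carries names.
public_records = [
    {"name": "Jane Doe", "zip": "50301", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "73102", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anonymous_rows, named_rows):
    """Join the two datasets on quasi-identifiers, reattaching names."""
    matches = []
    for row in anonymous_rows:
        key = tuple(row[q] for q in QUASI_IDENTIFIERS)
        for named in named_rows:
            if tuple(named[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append({"name": named["name"], **row})
    return matches

for match in reidentify(deidentified, public_records):
    print(match)  # e.g. {'name': 'Jane Doe', ..., 'credit_score': 612}
```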
DOJ’s Proposed “Bulk Sensitive Personal Data Transfer” Rule Implicates Artificial Intelligence Use and Development
As discussed in a recent Legal Intelligencer article by Faegre Drinker attorneys, the Department of Justice has issued a Notice of Proposed Rulemaking (NPRM) for a rule that would regulate certain transactions involving bulk sensitive personal data with “countries of concern” and “covered persons.” The NPRM discusses several instances of AI usage or development by U.S.-based entities that may lead to coverage and exposure to civil or criminal liability under the rule. In one example, the NPRM describes a hypothetical U.S. entity that develops an AI chatbot trained on bulk sensitive personal data and licenses the chatbot for use in a country of concern, where users are then able to prompt the chatbot to disclose the training data. The NPRM states that this would be “data brokerage” and constitute a prohibited transaction under the rule that could lead to substantial civil or criminal penalties: “Even though the license did not explicitly provide access to the data, this is a prohibited transaction because the U.S. company knew or should have known that the use of the chatbot pursuant to the license could be used to obtain access to the training data, and because the U.S. company licensed the product to covered persons.”
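The extraction scenario the NPRM describes is a recognized attack on language models: prompting with a prefix the model may have memorized from its training data. The sketch below is a hypothetical illustration only, using the small open GPT-2 model from the Hugging Face transformers library as a stand-in for the licensed chatbot; the probe string and any completion are invented.

```python
from transformers import pipeline

# Downloads the small open GPT-2 model on first run; used here purely as a
# stand-in for the chatbot in the NPRM's hypothetical.
generator = pipeline("text-generation", model="gpt2")

# A hypothetical probe: a prefix likely to precede sensitive training records.
probe = "Patient name: John Q. Public, date of birth:"
completions = generator(
    probe, max_new_tokens=20, num_return_sequences=3, do_sample=True
)

for c in completions:
    print(c["generated_text"])
# If the model memorized a training record beginning with this prefix, a
# completion can reproduce it verbatim -- the "access" the NPRM is aimed at.
```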
FTC Takes Action Against AI Facial Recognition Company for Alleged Deceptive Claims
The Federal Trade Commission has announced enforcement action against IntelliVision Technologies Corp., alleging the company made false and misleading claims about its AI-powered facial recognition software’s accuracy and freedom from gender and racial bias. According to the FTC’s complaint, testing by the National Institute of Standards and Technology (NIST) revealed performance disparities across different demographic groups, and that the company’s software was not among the top 100 facial recognition algorithms tested by NIST. The Commission also alleged that IntelliVision misrepresented the scale of its training data, claiming “millions” of faces when in fact using only approximately 100,000 unique individuals with computer-generated variants, and made unsupported claims about its anti-spoofing capabilities. Under the proposed consent order, which follows a unanimous 5-0 Commission vote, IntelliVision will be prohibited from making misrepresentations about its technology’s accuracy, efficacy or comparative performance across different demographic groups without competent and reliable testing to support such claims.
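For context on the bias allegations, demographic evaluations like NIST’s compare error rates group by group. The following minimal Python sketch illustrates that idea with invented data; the trial records, group labels and false_non_match_rate helper are hypothetical and do not reflect NIST’s actual methodology or IntelliVision’s results.

```python
from collections import defaultdict

# Each trial records the subject's demographic group, whether the image pair
# was a genuine match, and whether the algorithm declared a match.
trials = [
    {"group": "A", "is_match": True, "predicted": True},
    {"group": "A", "is_match": True, "predicted": False},
    {"group": "B", "is_match": True, "predicted": True},
    {"group": "B", "is_match": True, "predicted": True},
]

def false_non_match_rate(rows):
    """Share of genuine pairs that the algorithm failed to match."""
    genuine = [r for r in rows if r["is_match"]]
    misses = sum(1 for r in genuine if not r["predicted"])
    return misses / len(genuine) if genuine else 0.0

by_group = defaultdict(list)
for trial in trials:
    by_group[trial["group"]].append(trial)

for group, rows in sorted(by_group.items()):
    print(f"group {group}: FNMR = {false_non_match_rate(rows):.0%}")
# Prints 50% for group A and 0% for group B -- a disparity of the kind that
# would undercut an unqualified "no bias" claim absent reliable testing.
```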
CMS Draft Rule Sets Medicare Advantage Requirements for AI
On November 26, the Centers for Medicare & Medicaid Services released a proposed rule that will in all likelihood conclude the Biden administration’s rulemaking for the Medicare Advantage (MA) and Medicare Part D prescription drug programs. The rule is far-ranging and unlikely to be finalized in its current form, if at all. Still, the proposals addressing the use of artificial intelligence and automated systems in providing MA services are worth reviewing and could influence the direction of policymaking in the years to come. Here are our main takeaways:
- CMS proposed to reinterpret its existing regulations under 42 CFR 422.112(a)(8) to clarify that “irrespective of delivery method or origin, whether from human or automated systems,” MA organizations must “ensur[e] equitable access to MA services.” Further, CMS specifies that “[a]rtificial intelligence or automated systems, if utilized, must be used in a manner that preserves equitable access to MA services.”
- CMS seeks to provide additional clarity about what constitutes artificial intelligence or an automated system within the meaning of the MA program regulations by explicitly defining these terms under proposed 422.2, drawing on the Biden administration’s Blueprint for an AI Bill of Rights for the definition of “automated system” and on the National Artificial Intelligence Initiative Act of 2020 for the definition of “artificial intelligence.” The terms “patient care decision support tool” and “passive computing infrastructure” are also defined.
- Ultimately, the proposals do not amount to significant new obligations for MA organizations in practice, but the definitions may bring some welcome clarity about how CMS (at least under the Biden administration) would define what is and is not artificial intelligence and an automated system — terms that are squishy at best.
- It is curious that CMS’s proposal does not extend to the obligations of Part D sponsors, given the use of various automated systems in that program as well. CMS’s uncharacteristic restraint in this regard may indicate that the agency perceives a shortfall in its legal authority in the post-Chevron environment.
FSB Warns of AI Adoption Risks to Financial Stability and Calls for Stronger Regulatory Oversight
On November 14, the Financial Stability Board (FSB) released “The Financial Stability Implications of Artificial Intelligence,” a comprehensive report examining how AI adoption in finance could affect global financial stability. While acknowledging AI’s potential benefits for operational efficiency, regulatory compliance, and personalized financial services, the FSB identifies four key vulnerabilities that could increase systemic risk: third-party dependencies and AI service provider concentration, increased market correlations from widespread use of similar AI models and data, heightened cybersecurity risks including from AI-enabled attacks, and elevated model risk due to AI systems’ limited explainability and data quality challenges. The report notes that while many financial institutions appear to be taking a go-slow approach to generative AI, competitive pressures and the technology’s accessibility could accelerate adoption. The FSB recommends that authorities enhance their monitoring of AI developments, assess whether current regulatory frameworks adequately address AI-related vulnerabilities, and strengthen their supervisory capabilities, including through increased cross-border cooperation.
Bank of England and UK FCA Findings on Third Survey of AI and ML in Financial Services
On November 21, the Bank of England and the Financial Conduct Authority issued their third joint survey on the use of AI in financial services. The survey reveals that 75% of financial firms are already using AI, with an additional 10% planning to adopt it within the next three years. While operations, IT, risk and compliance, and sales and marketing remain the key use areas, a growing number of firms are deploying AI in asset management, investment banking, HR and legal functions, including a significant number of use cases that involve automated decision making. Despite the high adoption rate, only 34% of firms surveyed report having a complete understanding of the AI technologies they use.
IAIS Publishes AI Application Paper
On November 18, the International Association of Insurance Supervisors (IAIS) published its Draft Application Paper on the Supervision of Artificial Intelligence for consultation. While application papers do not establish new standards, this one considers how existing IAIS Insurance Core Principles should apply to insurers and intermediaries that deploy AI systems. There’s a lot to digest, but if you want a sense of what the international types are saying, check out Section 3.4, “Human oversight and allocation of management responsibilities,” which calls for more board involvement than we’re used to in the U.S.; Section 5, “Transparency and explainability,” which includes a more robust discussion of explainability than is contained in the NAIC’s AI model bulletin; and Table 3, which should be helpful for companies struggling to develop an AI risk assessment methodology (you know who you are).
California Advances Rulemaking to Update CPPA Regulations on Automated Decision-Making Technology
On November 8, the California Privacy Protection Agency board voted to commence the formal rulemaking process to update California Consumer Privacy Act regulations, clarify when insurance companies must comply, and operationalize requirements for conducting risk assessments and annual cybersecurity audits. Notably, the draft regulations include provisions that will impact businesses’ use of automated decision-making technology (ADMT) by requiring businesses to allow consumers to opt out of the use of ADMT, provide consumers information in pre-use notices and in responses to inquiries regarding ADMT, and evaluate whether certain ADMT uses discriminate based on protected classes. The public comment period on the draft regulations closes on January 14, 2025.
Colorado Proposes Amended ECDIS Governance and Risk Management Regulation
On December 6, the Colorado Division of Insurance released a proposed amended regulation setting forth governance and risk management requirements for insurers that (i) offer life insurance, private passenger auto insurance or health benefit plans in Colorado and (ii) use external consumer data and information sources (ECDIS) in any insurance practice. The proposal modifies the existing requirements for life insurers and extends the requirements to private passenger auto insurers and health insurers. Comments are due on December 13.
States Adopt NAIC AI Model Bulletin
Iowa, Oklahoma and Massachusetts have adopted the National Association of Insurance Commissioners’ AI model bulletin, bringing the total number of adopting jurisdictions to 20. The bulletin sets forth regulatory expectations for insurers that use AI systems, including implementation of a robust AI governance and risk management framework.
Class Action Alleges Persona Violated Privacy Laws by Using Government IDs to Train AI Models
A purported class action was filed in the Northern District of Illinois against Persona Identities, Inc. (Persona), alleging that the company impermissibly collected protected personally identifying information (PII) from government-issued identification cards and then used that information to train its machine learning and artificial intelligence algorithms in violation of the Illinois Identification Card Act and the Illinois Driver’s License Act. Persona sells online identity verification software that compares user-submitted selfies with government-issued identification to confirm an individual’s identity. The complaint alleges that delivery companies such as DoorDash use Persona’s product to verify the identity of their delivery drivers. Under the Illinois acts, information obtained from an Illinois identification card or driver’s license “may only be used for purposes of identification of the individual or for completing the commercial transaction in which the information was obtained,” and “may not be used for purposes unrelated to the transaction.” Persona, however, is alleged to have collected, analyzed and trained its software on PII submitted in the normal course of consumers’ use of the software. The purported class bases its allegations on its interpretation of Persona’s privacy policy, which states that Persona may use uploaded identification to “improve our product and service offerings” and “conduct research,” language the plaintiffs allege “suggests that Persona utilizes the data it collects, including PII collected by its API, to enhance the functionality and accuracy of its services — all of which exceed the scope of use permitted by law.” Persona’s deadline to respond to the complaint is January 2, 2025.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.