Artificial Intelligence Briefing: White House Announces the Completion of Agency Actions Required by the AI Executive Order
A week before the EU’s AI Act comes into force on August 1, the White House announced that U.S. agencies have completed all actions required to date by President Biden’s AI Executive Order. Meanwhile, the Republican Party Platform calls for the Executive Order’s reversal. Nonetheless, AI regulation may be one of the few nonpartisan issues for voters, as recent polling suggests that 75% of Democrats and 75% of Republicans believe that taking a careful and slow approach to AI is preferable to being the first country to achieve powerful AI breakthroughs. We’re diving into these developments and more in the latest briefing.
Regulatory, Legislative and Litigation Developments
- EU Artificial Intelligence Act Comes Into Force on August 1, 2024. The European Union Artificial Intelligence Act (AI Act) was published in the Official Journal of the European Union on July 12, 2024, and will enter into force on August 1. The AI Act is one of the most keenly anticipated pieces of legislation globally and has been subject to intense negotiation and revision since its inception in 2021. We considered the provisions of the AI Act in detail in our previous article. Its various obligations will come into force in phases, between six and 36 months after August 1, 2024. While the compliance timetable may seem relatively long, developing an AI compliance program is a time-consuming process, and businesses will need to start as soon as possible. Many of the obligations under the AI Act are drafted in relatively high-level terms, and detailed guidance is unlikely to emerge for several months. In the meantime, businesses should press on with their compliance programs by taking a risk-based approach and benchmarking against emerging industry norms and the practices of their peers.
- UK’s Position on AI Regulation in the King’s Speech. On July 17, 2024, the new UK government announced its plans for the parliamentary term in the King’s Speech. The government will look to introduce legislation to regulate the development of AI, in particular the “most powerful” AI models. It remains to be seen whether this will mean a specific AI Act that mirrors, or closely follows, the EU’s AI Act (or parts of it), or piecemeal amendments to legislation in other areas. The exact focus of the legislation, and whether it will be limited to large, powerful AI models, is yet to be determined. Whatever form the legislation takes, it marks a clear departure from the previous administration, which favored a lighter-touch, voluntary approach.
- White House Announces Continued Implementation of Executive Order. On July 26, 2024, the White House announced that federal agencies have completed all actions required by President Biden’s AI Executive Order in advance of the 270-day deadline. Among other milestones, the National Institute of Standards and Technology (NIST) and the U.S. AI Safety Institute announced five AI-related deliverables. The deliverables include draft and final guidance documents and open-source software that allows users to test how their AI models and systems respond to adversarial attacks. The software, which is free to download, responds to the Executive Order’s directive that NIST support model testing so users can learn how often, and under what circumstances, their AI systems and models might fail; a simplified illustration of this kind of testing follows this item. Three of the guidance documents are final versions of previously published guidance (relating to generative AI risks, and to management and support for the development of global AI standards), while the fourth offers an initial public draft of guidance on the misuse of AI systems and dual-use foundation models for harmful purposes. NIST is accepting comments on that draft guidance until September 9.
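For readers curious what adversarial testing of a model can involve, the following is a minimal sketch, not the NIST software itself: it applies a fast gradient sign method (FGSM) style perturbation to a toy logistic-regression model and measures how often small input changes flip the model’s predictions. The model, weights and perturbation sizes are all illustrative assumptions.

```python
# A minimal sketch of adversarial robustness testing (FGSM-style).
# This is NOT the NIST tool; it only illustrates the general idea of
# perturbing inputs along the loss gradient and measuring how often
# the model's prediction flips. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "model" with fixed random weights.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    """Probability of class 1 for input(s) x."""
    return 1 / (1 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps):
    """One FGSM step: move x in the direction that increases the loss."""
    grad = (predict(x) - y) * w  # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad)

# Treat the model's clean predictions as ground truth, then measure how
# often small adversarial perturbations flip them.
X = rng.normal(size=(1000, 20))
y = (predict(X) > 0.5).astype(float)
for eps in (0.01, 0.05, 0.1):
    X_adv = np.array([fgsm_perturb(x, yi, eps) for x, yi in zip(X, y)])
    flip_rate = np.mean((predict(X_adv) > 0.5).astype(float) != y)
    print(f"eps={eps}: {flip_rate:.1%} of predictions flipped")
```

Production testing frameworks automate this kind of loop across many attack types and report aggregate failure rates and the conditions under which they occur.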
- FTC Launches Investigation of AI-Powered Surveillance Pricing. On July 23, 2024, the Federal Trade Commission issued orders to eight companies offering artificial intelligence and data-driven pricing tools, seeking information on how these technologies may impact consumers. The companies advertise AI-powered solutions that can set individualized prices for customers based on personal data such as location, demographics, credit history and online behavior. The FTC is investigating this practice, which it terms “surveillance pricing,” due to concerns about potential harms to consumer privacy and fair pricing. Using its authority under Section 6(b) of the FTC Act, the agency is demanding details on the types of pricing products and services offered, data collection methods, customer information, and the impact on consumers and prices. The investigation aims to help the FTC better understand how AI-driven pricing strategies may affect consumer privacy, competition and protection in the marketplace.
- FCC Proposes AI-Generated Robocall Rules. On July 17, 2024, the Federal Communications Commission released a draft notice of proposed rulemaking (NPRM) regarding AI-generated robocalls and robotexts. The NPRM, if approved, would seek comment on numerous matters, including the definition of AI-generated calls and a proposed requirement that callers disclose their use of AI-generated calls. Specifically, the proposal would require callers to disclose, when obtaining consent, “that the caller intends to make use of AI-technology to generate voice or text content” and to disclose at the beginning of each message “whether the call uses an artificial intelligence-generated voice.” The Commission will consider the proposal at its August 7 open meeting.
- Six Federal Agencies Issue Rule on Automated Valuation Models. On June 24, 2024, six federal agencies issued a final rule to implement quality control standards for the use of automated valuation models (AVMs) in determining the value of a mortgage secured by a consumer’s principal dwelling. Institutions that use such AVMs must adopt policies and controls designed to address five factors: ensuring a high level of confidence in the estimates the AVM produces, requiring random sample testing, avoiding conflicts of interest, protecting against the manipulation of data, and complying with nondiscrimination laws. Rather than prescribing specifics, the final rule allows institutions to adopt quality control measures as appropriate; a simplified illustration of random sample testing follows this item.
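As a concrete illustration of what the random sample testing factor might look like in practice, the sketch below back-tests hypothetical AVM estimates against realized sale prices. The rule does not prescribe any particular method; the sample fraction, error threshold and data here are assumptions made purely for illustration.

```python
# Hypothetical back-test of AVM estimates against realized sale prices.
# The final rule leaves quality-control specifics to each institution;
# the 10% sample and 15% error threshold below are illustrative only.
import random

random.seed(42)

# (avm_estimate, actual_sale_price) pairs from a hypothetical portfolio.
portfolio = [(300_000 * (1 + random.uniform(-0.2, 0.2)), 300_000)
             for _ in range(500)]

sample = random.sample(portfolio, k=len(portfolio) // 10)  # random 10% sample

errors = [abs(est - actual) / actual for est, actual in sample]
mean_err = sum(errors) / len(errors)
outliers = sum(e > 0.15 for e in errors)  # estimates off by more than 15%

print(f"sample size: {len(sample)}")
print(f"mean absolute error: {mean_err:.1%}")
print(f"estimates off by >15%: {outliers} ({outliers / len(sample):.1%})")
```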
- HHS Aims to Centralize Artificial Intelligence, Cybersecurity and Data Policymaking. On July 25, 2024, the U.S. Department of Health and Human Services (HHS) announced in the Federal Register a “strategic reorganization” that consolidates responsibility for policy and operations relating to data and technology in health care and human services within a newly named Assistant Secretary for Technology Policy and Office of the National Coordinator (ASTP/ONC). Notably, the new ASTP/ONC will house the HHS-wide role of chief AI officer, among other positions. The ASTP/ONC is not yet fully staffed; HHS is currently recruiting for the chief technology officer, chief AI officer and chief data officer positions. In addition, the public-private cybersecurity effort between the health sector and the federal government (the 405(d) Program) will move to the Administration for Strategic Preparedness and Response (ASPR), joining the other health sector cybersecurity activities already located in ASPR’s Office of Critical Infrastructure Protection and advancing the department’s goal of a one-stop shop for health care cybersecurity policy. A webinar is scheduled for August 1 to discuss these changes further. The centralization of subject matter expertise within the ASTP/ONC and ASPR could give these entities greater influence over federal regulation of health insurers and foster closer coordination with other HHS components with jurisdiction over HIPAA-covered entities, including CMS and the Office for Civil Rights.
- Republican Platform Calls for Reversal of Biden’s AI Executive Order. The newly adopted Republican Party Platform vows to repeal President Biden’s October 2023 Executive Order directing the federal government to undertake sweeping regulation of AI and related emerging technology. The platform calls the Executive Order “dangerous,” stating that it will hinder AI innovation and impose “Radical Leftwing ideas” on the development of the technology. Some tech industry leaders and institutes similarly support repeal, fearing that the Order deters new entrants to the AI market and imposes complex new burdens on less established businesses. Other industry leaders and consumer advocates, however, find the call for repeal worrisome, citing the concerns over bias, privacy and national security that the Order seeks to address. Even so, AI regulation may prove to be one of the few issues that crosses party lines among voters: in recent polling, 75% of Democrats and 75% of Republicans said that taking a careful and slow approach to AI is preferable to being the first country to achieve powerful AI breakthroughs.
- Senate Committee Discusses AI and the Need to Protect Americans’ Privacy. On July 11, 2024, the Senate Commerce, Science, and Transportation Committee convened to discuss growing concerns about AI’s ability to extract sensitive insights from personal data, which poses significant risks to consumer privacy. Chairwoman Cantwell linked these concerns to her proposed privacy legislation, which faces challenges due to its potential to preempt state laws, allow a private right of action and affect the many technologies that rely on AI. The discussion underscored how AI has advanced beyond simple cookie data to aggregating personal information for model training. Democrats expressed concern that, without federal privacy laws, companies lack the incentive to safeguard privacy, potentially enabling discriminatory practices, deepfakes and AI scams, and aiding malicious actors. Republicans countered that such legislation could stifle innovation and impose burdensome regulations. Witnesses suggested empowering the FTC and clarifying data ownership rights to protect consumers and foster transparency in AI development. Some also cautioned about the financial strain and legal complexity small businesses may face in complying with stringent privacy laws.
- HFSC Leadership Releases AI Working Group Staff Report. On July 18, 2024, House Financial Services Committee Chair Patrick McHenry (R-NC) and Ranking Member Maxine Waters (D-CA) released the staff report of the committee’s Bipartisan Working Group on Artificial Intelligence. While the report does not set forth a specific legislative roadmap, it reflects information learned during several roundtables regarding AI use cases across financial services and housing, the risks and benefits of AI, and potential barriers to using the technology. The report suggests the committee should remain involved in overseeing the implementation of AI tools in the financial and housing industries. In particular, the drafters expressed concern with ensuring that the use of AI tools does not lead to bias or discrimination in decision-making, and called for enforcement of existing anti-discrimination laws and for ongoing review and updating of federal laws governing financial institutions and data, including the Gramm-Leach-Bliley Act and the Fair Credit Reporting Act, to ensure data privacy protections. The report also recommends the committee: (1) assess regulatory gaps related to AI; (2) ensure financial regulators have the appropriate tools to oversee new products and services; (3) consider reforms to data privacy laws; (4) work with financial regulators to understand the impact of AI; and (5) ensure the U.S. is a global leader on AI use and development. The committee held a related hearing on July 23, “Insights into AI Applications in Financial Services and Housing.”
- FINRA Issues Guidance on Generative AI. On June 27, 2024, FINRA issued Regulatory Notice 24-09 to remind member firms that FINRA’s rules continue to apply when firms use generative AI technologies. For example, if a firm uses generative AI as part of its supervisory system, the firm should have policies and procedures addressing the integrity and reliability of the underlying AI model. FINRA also reminded firms to evaluate generative AI technologies before deployment. The notice states that further guidance on how specific rules apply to specific AI use cases may be released in the future.
- NYDFS Adopts Circular Letter on the Use of AI. On July 11, 2024, the New York Department of Financial Services adopted Insurance Circular Letter No. 7, regarding the use of artificial intelligence systems and external consumer data and information sources in insurance underwriting and pricing. The final circular largely mirrors the proposed circular released on January 17 and provides guidance for all insurers licensed to write insurance in New York. The circular covers a variety of matters, including documentation, training, board of directors and senior management oversight, risk management, third-party vendors, and unfair and unlawful discrimination. Notably, the final circular letter states that insurers should conduct qualitative and quantitative testing of their AI systems and external consumer data and information sources to demonstrate that no unfair or unlawful discrimination will occur; a simplified example of such quantitative testing follows this item.
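For illustration, here is a minimal sketch of one common form of quantitative testing: computing an adverse impact ratio (the “four-fifths” heuristic borrowed from employment law). The circular does not prescribe this or any particular method, and the groups, decisions and threshold below are hypothetical.

```python
# Hypothetical sketch of one common quantitative fairness check: the
# adverse impact ratio (a group's approval rate divided by the approval
# rate of the most-favored group). The NYDFS circular does not mandate
# this specific test; the data and 0.8 threshold are illustrative only.
from collections import defaultdict

decisions = [  # (group, approved) pairs from a hypothetical underwriting model
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: a / t for g, (a, t) in counts.items()}
best = max(rates.values())
for g, r in sorted(rates.items()):
    ratio = r / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"group {g}: approval rate {r:.0%}, impact ratio {ratio:.2f} ({flag})")
```

In practice, an insurer would run tests of this kind on real underwriting outcomes across protected classes and pair them with the qualitative review the circular also contemplates.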
- NAIC Accelerated Underwriting Working Group Exposes Revised Guidance Document. On July 11, 2024, the National Association of Insurance Commissioners Accelerated Underwriting Working Group reviewed comments received on its draft Accelerated Underwriting Regulatory Guidance and exposed a revised version for comment through July 26. The document aims to provide regulators with guidance when reviewing life insurers’ use of accelerated underwriting programs, and includes considerations related to algorithms, predictive models, data inputs and external data sources. The working group will discuss comments received, and possibly adopt the guidance, on its August 6 call.
- Mobley v. Workday Decision Sets Precedent for AI Vendor Liability in Employment. The Northern District of California recently issued an important ruling allowing a novel theory of liability to move forward against an AI vendor in the Mobley v. Workday class action litigation. In particular, the court allowed employment discrimination claims to proceed against Workday as the “agent” of the client-employers that used its AI software in their recruiting practices. The court rejected Workday’s argument that it simply implemented the criteria established by its clients. Instead, the court held that Mobley’s complaint “plausibly alleges that Workday’s customers delegate traditional hiring function, including rejecting applicants, to the algorithmic decision-making tools provided by Workday” by generally alleging that “Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others.” The court explained its rationale, stating that “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject.” Accordingly, “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era.” The ruling is a first step toward setting precedent for AI vendor liability in the employment context and will likely lead to extensive discovery into Workday’s AI algorithms and the use of its tools in the hiring processes of its client-employers.
- Judge Dismisses Majority of GitHub Copyright Claims. On June 24, 2024, Judge Jon Tigar significantly narrowed the claims in the class action lawsuit filed by software developers against Microsoft, OpenAI and GitHub over the use of the developers’ code, without permission, to train the GitHub Copilot tool. The dismissed claims include alleged violations of the Digital Millennium Copyright Act (DMCA) as well as certain state law claims. Judge Tigar’s dismissal centers on the plaintiffs’ failure to show that Copilot had produced an identical copy of any original work. The remaining claims allege breach of contract based on the open-source licenses.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.