May 10, 2024
Artificial Intelligence Briefing: White House Issues AI Progress Report
Regulatory, Legislative and Litigation Developments
- Federal Agencies and the White House Issue Six-Month Update on AI Activities Outlined in President Biden’s Executive Order on AI. Since President Biden issued his sweeping Executive Order on the Safe, Secure and Trustworthy Development of AI (EO) last year, federal agencies have been racing to meet the EO’s lofty expectations and tight deadlines. Last week brought a flurry of activity and agency announcements as we hit the EO’s 180-day mark, and the White House released a summary of the key developments. Here are just a few (and you can read more in our legal update):
- The National Institute of Standards and Technology (NIST) released drafts of four guidance documents addressing generative AI risks and how to mitigate them, the reduction of threats to the data used to train AI systems, and the development of global AI standards.
- The U.S. Patent and Trademark Office published a request for comment seeking feedback on how AI might affect assessments of the level of ordinary skill in the art used to determine whether an invention is patentable.
- The Department of Labor issued a guide on mitigating harm in AI-informed employment decisions.
- A Framework for Nucleic Acid Synthesis Screening will soon establish screening requirements for recipients of federal R&D funds.
- The Department of Homeland Security launched an AI Safety and Security Board.
- The White House announced funding and other initiatives to harness AI’s potential and to hire AI talent into government roles.
- HHS Finalizes ACA Section 1557 Nondiscrimination Rules Addressing Patient Care Decision Support Tools by Providers and Health Insurers. By May 1, 2025, providers and health insurers alike will need to comply with new rules finalized last week by the Health and Human Services Department’s Office for Civil Rights (OCR). The rules require that a covered entity “make reasonable efforts to identify uses of patient care decision support tools in its health programs or activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability.” For each such use identified, the covered entity must make reasonable efforts to mitigate the risk of any resulting discrimination. OCR’s AI policymaking is not finished: the final rules contain a request for additional comment on other tools that do not directly impact patient care but whose use may result in unlawful discrimination, such as tools used by payers for provider reimbursement and for detecting fraud, waste and abuse.
- HUD Issues Guidance on Digital Advertising. If your organization advertises through digital platforms (and it probably does), you might want to check out new guidance from the Department of Housing and Urban Development. While the guidance focuses on application of the Fair Housing Act to the advertising of housing, credit and other real estate-related transactions (like homeowners insurance), it does a nice job of explaining how advertising through digital platforms can result in unintentional discrimination. The guidance also includes a few recommendations for mitigating that possibility.
- SEC Settles Charges Against Investment Firms for “AI Washing.” On March 18, the Securities and Exchange Commission (Commission) announced settled charges against two investment firms for “AI washing,” that is, making false or misleading statements about a company’s use of AI. The Commission’s order against investment adviser Delphia (USA) Inc. highlighted the company’s false and misleading statements in filings, advertisements and social media posts touting AI capabilities in its investment process that the company did not have. Similarly, the Commission’s order against Global Predictions, Inc. referenced the company’s false and misleading AI claims featured on its website and social media sites. In announcing the settled charges, in which the firms agreed to pay civil penalties, both Commission Chair Gary Gensler and Director of Enforcement Gurbir Grewal reiterated the Commission’s focus on protecting investors from misleading AI representations.
- Bipartisan Privacy Proposal Includes AI Provisions. The draft American Privacy Rights Act released on April 7 includes several provisions that could impact covered entities’ use of certain algorithms when advertising for or determining access to their products or services. Covered entities could be required to conduct impact assessments, perform algorithm design evaluations and offer individuals the right to opt out of the entity’s use of certain algorithms. Among other things, the act would apply to the use of algorithms that pose a consequential risk of harm, including with respect to housing, education, employment, health care, insurance and credit opportunities.
- Connecticut AI Bill Fails. The Connecticut legislature has adjourned without enacting Senate Bill 2, which was primarily focused on preventing algorithmic discrimination and promoting the safe and transparent use of AI. The bill, which was passed by the Senate, would have required AI developers to provide deployers with information and documentation regarding any high-risk AI systems they develop and to post on their websites a public use case inventory of those systems. Deployers of high-risk AI systems would have been required to implement an AI risk management policy and program, conduct impact assessments, regularly check for algorithmic discrimination and provide notice to consumers when a high-risk system is a controlling factor in making a consequential decision about the consumer.
- NAIC Third-Party Task Force Releases Proposed Workplan. On April 6, the National Association of Insurance Commissioners Third-Party Data and Models (H) Task Force exposed its draft workplan for a 30-day public comment period. The task force is charged with developing and proposing a framework for the regulatory oversight of third-party data and predictive models. Meanwhile, 11 states have adopted the NAIC’s AI model bulletin, with more on the way.
What We’re Reading
- Advancements in AI Large Language Models. U.S.-based AI company Anthropic’s new AI model, Claude 3, released in March 2024, has shown exceptional performance, surpassing OpenAI's GPT-4 in tasks like summarizing extensive documents and complex problem-solving. While some behaviors observed in testing suggest a level of self-awareness, experts clarify that Claude 3 remains a tool for pattern recognition without true sentience. The model's capabilities, including its quick response times and reduced error rate, underscore its potential to revolutionize various sectors by enhancing efficiency and accuracy. The release of Claude 3 will undoubtedly prompt further dialogue about the ethical use and regulatory oversight of advanced AI systems.