August 01, 2023

Artificial Intelligence Briefing: NAIC Lays Out Regulatory Guidance in 10-Page Bulletin

The NAIC issues a 10-page bulletin laying out regulatory guidance regarding the use of algorithms, AI and predictive models; Connecticut lawmakers address the state’s use of AI; and leading AI companies meet at the White House to voluntarily commit to safeguards against key risks associated with AI. We explore these developments and more in the latest briefing.

Regulatory and Legislative Developments

  • NAIC Exposes Model Bulletin for Comment. The National Association of Insurance Commissioners has released its draft model bulletin regarding the use of algorithms, predictive models and artificial intelligence systems by insurers. The 10-page bulletin lays out regulatory guidance and expectations, including with respect to governance; risk management and internal controls; and third-party AI systems. The model bulletin also describes the questions insurers should expect as part of a regulatory investigation or market conduct exam. The Innovation, Cybersecurity, and Technology (H) Committee will hear in-person comments at the NAIC’s Summer National Meeting on August 13, and written comments may be submitted through September 5.
  • Connecticut Regulates State Use of AI. Connecticut has passed a bill that will likely serve as a model for the state’s future legislation governing private employers’ use of artificial intelligence. The “Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy” requires the state’s Department of Administrative Services (DAS) to conduct an inventory of all systems using AI in any state agency. Beginning February 1, 2024, the Act also requires DAS to carry out regular evaluations to ensure that the use of such systems does not lead to unlawful discrimination or disparate impact. The Act took effect on July 1, 2023.
  • White House Secures Concessions from AI Companies. On July 21, 2023, leading AI companies met at the White House, where they agreed to voluntary commitments designed to address key risks associated with AI. Although the commitments are short on details, the companies pledged (among other things) to: test their AI models and systems, including for security vulnerabilities and bias/discrimination; invest in cybersecurity safeguards; and develop mechanisms, such as watermarks, that will enable the public to recognize when content is generated by AI. The Biden administration also announced that it is developing an executive order and will seek to advance legislation focused on AI safety, security and trust. In recognition of the global challenges and opportunities associated with AI, the White House also highlighted its bilateral and multilateral work to develop an international framework for AI governance.
  • Fed Official Warns of Fair Lending Implications of AI. On July 18, 2023, Federal Reserve Vice Chair for Supervision Michael S. Barr gave a speech on how the Fed is adjusting its application of the Fair Housing Act and the Equal Credit Opportunity Act to address the increasing prevalence of artificial intelligence. Barr observed that AI can help assess the creditworthiness of individuals without credit histories and facilitate wider access to credit for those who might otherwise be excluded. But, he cautioned, the technology also has the potential to violate the fair lending laws and may perpetuate existing disparities and biases. He warned that these risks are amplified when a model is opaque. Finally, Barr said that the Fed is working to ensure that its supervision keeps pace with lending practices that rely on artificial intelligence and machine learning.
  • FTC Investigating OpenAI for Possible Violations of Consumer Protection Laws. The FTC reportedly sent a 20-page civil investigative demand to OpenAI to better understand the company’s technology, training data, risk assessments, testing procedures, and privacy and security measures. The demand indicates that the FTC is concerned about whether OpenAI “(1) engaged in unfair or deceptive privacy or data security practices or (2) engaged in unfair or deceptive practices relating to risks of harm to consumers . . . in violation of Section 5 of the FTC Act” in connection with its large language models. The FTC has stressed the importance of truth, fairness, equity and accountability when leveraging AI technology. This is the first significant U.S. government investigation into OpenAI’s technology and practices.
  • Use of Large Language Models in Clinical Medicine. A recently published study showed that large language models can achieve about 67% accuracy on U.S. Medical Licensing Exam-style questions, an improvement over previous models but still far short of the performance of human clinicians.

What We’re Reading

  • The American Academy of Actuaries published an issue brief discussing data biases that actuaries may confront and their implications for the insurance industry. The issue brief recognizes that, while artificial intelligence has the potential to increase fairness, models can be impacted by biases embedded in training data. The issue brief also provides practical suggestions for mitigating the impact of such bias.
