Artificial Intelligence Briefing: NAIC Adopts AI Model Bulletin
The NAIC adopts a landmark framework outlining regulatory expectations for the use of AI by insurers. Meanwhile, a California agency proposes regulations for the use of automated decisionmaking technology by businesses, and the negotiation of the EU AI Act is in its final stages. We’re diving into these developments and more in the latest briefing.
Regulatory, Legislative and Litigation Developments
- NAIC Adopts AI Model Bulletin. On December 4, the National Association of Insurance Commissioners unanimously adopted a model bulletin that sets forth regulatory expectations for insurers using artificial intelligence. The bulletin requires that such insurers adopt an AI governance and risk management framework and encourages them to test for unfair discrimination. Time will tell how many states adopt the bulletin, but make no mistake — the regulatory landscape is changing quickly for insurers that use AI. We’re here to help.
- CPPA Proposes Regulatory Framework for Automated Decisionmaking Technology. On November 27, the California Privacy Protection Agency released draft automated decisionmaking technology (ADMT) regulations pursuant to Cal. Civ. Code § 1798.185(a)(16). The regulations would target businesses that use ADMT — which is defined to mean “any system, software, or process — including one derived from machine-learning, statistics, or other data-processing or artificial intelligence — that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” Specifically, the draft regulations propose requirements for businesses that use ADMT for (i) decisions that have a significant impact on consumers (including with respect to financial or lending services, housing, insurance, education, criminal justice, employment, health care services or essential goods and services), (ii) profiling an employee, independent contractor, job applicant or student, (iii) profiling consumers in publicly accessible places, or potentially (iv) profiling consumers for behavioral advertising. The draft regulations would protect consumers through pre-use notices, the ability to opt out of a business’s use of ADMT (subject to limited exceptions), and the ability to access information about how a business uses ADMT, with potentially heightened protections when the consumer is known to be under the age of 13 or 16. The Agency board was to provide feedback on the draft regulations at the December 8, 2023, board meeting, with formal rulemaking expected to begin next year.
- House Considers How AI is Changing Health Care. On November 29, the health subcommittee of the House Energy and Commerce Committee held the fourth in a series of hearings intended to set a direction for national action on privacy and other AI-related legislation. Witnesses at the hearing addressed the increasing use of AI tools in health care coverage determinations, including for Medicare Advantage prior authorizations. The committee considered allegations that algorithmic software decision-making results in higher error rates or coverage decisions that are more restrictive than Medicare rules permit. One hearing witness later commented that the tension between insurers and providers is not new, but that AI has catalyzed it because the technology is being used both to deny claims and to write the appeals of those denials. The Consumer Technology Association (“CTA”) also weighed in, urging Congress to recognize where existing regulations provide sufficient protections, avoid duplicative law-making, and focus on “guardrails and outcomes” rather than on specific technologies. The CTA also urged Congress to pass a national privacy law, writing that robust data privacy regulations are essential to foster trust and confidence in AI-enabled health care tools.
- Negotiation of EU AI Act in Final Stages. Final “Trilogue” negotiations between the EU Commission, Council and Parliament over the proposed EU AI Act (Act) began on Wednesday morning, ran through the night, and were adjourned on Thursday afternoon, December 7; talks are set to resume on Friday. EU lawmakers appear to have agreed on provisional terms for regulating foundation models, with a possible exemption for open-source models unless those models are deemed high-risk or are used for prohibited purposes, according to leaked documents reported by media outlets. Remaining sticking points reportedly include prohibitions on facial recognition systems, driven by privacy concerns, with EU member governments divided on the use of AI biometric surveillance for law enforcement purposes. Final positions should emerge in the coming days, and we’ll keep you posted.
- UK AI Regulation Bill Proposes New AI Regulator. On November 22, the Artificial Intelligence (Regulation) Bill was introduced in the UK Parliament. The Bill seeks to establish a central AI Authority to oversee the UK’s regulatory approach to AI. The AI Authority would coordinate the activities of relevant regulators with respect to AI, carry out monitoring functions to assess risks across the economy and identify gaps in legislation, and promote interoperability with international regulatory frameworks.
- First Global Agreement on Secure AI Development. On November 26, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC) announced the release of the Guidelines for Secure AI System Development. The agreement was signed by 18 countries and is based on secure-by-design principles. “The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core,” said Secretary of Homeland Security Alejandro Mayorkas. “By integrating ‘secure by design’ principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system’s design and development,” he added.
Suggested Reading
- How Nations Are Losing a Global Race to Tackle A.I.’s Harms, New York Times (December 6, 2023)
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.