Artificial Intelligence Briefing: House Launches Bipartisan AI Task Force
The House of Representatives aims to keep the U.S. at the leading edge of AI policy and innovation with a bipartisan task force, while the FTC considers expanding a rule to crack down on AI impersonation. Meanwhile, a recent ruling in Canada warns businesses that they can be held responsible for incorrect information supplied by their chatbots. We’re diving into these developments and more in the latest briefing.
Regulatory, Legislative and Litigation Developments
- Launch of Bipartisan AI Task Force. On February 20, the House of Representatives announced the creation of a bipartisan Task Force on AI. The Task Force, led by Representatives Jay Obernolte (R-CA) and Ted Lieu (D-CA), is tasked with producing guiding principles, forward-looking recommendations, appropriate guardrails and policy proposals to shape the future of AI and keep the U.S. at the forefront of AI innovation.
- FTC Proposes New Protections to Combat AI Impersonation. AI-generated deepfakes and voice cloning are in the FTC’s crosshairs as the agency considers expanding a rule that protects consumers from harm caused by impersonation. While the recently finalized rule focuses on impersonation of government and businesses, the FTC received comments during the rulemaking process pointing out harm caused by impersonation of individuals, particularly when powered by AI tools. In response to those comments, a supplemental notice of proposed rulemaking seeks public input on the proposed expansion of the rule to cover impersonation of individuals. The FTC also specifically seeks comment on whether AI platforms should be broadly liable if they know or “have reason to know” their services are being used for deception.
- FTC Asked to Investigate State Medicaid Eligibility System. The National Health Law Program (NHeLP), the Electronic Privacy Information Center (EPIC) and Upturn are asking the Federal Trade Commission to investigate automated decision-making software used by Texas and 19 other states to conduct Medicaid eligibility determinations. According to the complaint, the organizations cite ongoing issues with the accuracy of the software’s eligibility determinations and with notices generated without human review. If the FTC opens an investigation, it may agree with the complainants’ contention that certain voluntary guidelines (the OECD Artificial Intelligence Principles, the Blueprint for an AI Bill of Rights, the Universal Guidelines for Artificial Intelligence, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of AI, and the National Institute of Standards and Technology’s AI Risk Management Framework) are “established public policies” that the FTC may consider in determining whether the software constitutes an unfair or deceptive trade practice.
- CMS Provides New Guidance on Use of AI Tools in Utilization Management for Medicare Advantage Plans. In connection with new rules governing utilization management practices in the Medicare Advantage program that became applicable this year, new “frequently asked questions” guidance reconfirms that nothing in Medicare Advantage law and regulations prohibits MA plans from using an algorithm or software tool to make coverage determinations, but “it is the responsibility of the MA organization to ensure that the algorithm or artificial intelligence complies with all applicable rules for how coverage determinations by MA organizations are made.” The guidance emphasizes that a coverage determination based on medical necessity must rest on the individual patient’s circumstances, and it provides examples involving terminations of post-acute care services and inpatient admission denials or downgrades to an observation stay. In so doing, CMS steps beyond the general guidance on the use of AI in utilization management contained in its April 2023 final rule and wades into two areas of current controversy surrounding the use of algorithms and artificial intelligence.
- SEC Chair Highlights AI Risks. In a recent speech, Securities and Exchange Commission Chair Gary Gensler outlined numerous AI risks to financial institutions, investors, and registrants. Chair Gensler cautioned that consolidated use of AI models could create issues of “dependencies and interconnectedness” among financial institutions, while AI hallucinations and optimization could lead to errors and conflicts of interest that harm investors. Moreover, absent sufficient guardrails, new AI tools could be exploited by bad actors and compromise investor protection. Chair Gensler also warned against “AI washing,” or the filing of misleading disclosures concerning a company’s AI use, capabilities or risks.
- NAIC to Resume Work on Accelerated Underwriting Regulatory Guidance. The National Association of Insurance Commissioners’ Accelerated Underwriting Working Group (AUWG) announced on February 13 that it will resume efforts to develop regulatory guidance on accelerated underwriting in life insurance. The working group had taken an extended break while work progressed on the AI/ML Life Insurance Survey and the AI Model Bulletin. Now that those projects are complete, the AUWG is re-exposing its draft Regulatory Guidance and Considerations document, as well as its Referral to the Market Conduct Examination Guidelines (D) Working Group, so that stakeholders can review them with the survey and bulletin in mind. Written comments are not requested, but the AUWG will schedule a meeting prior to the Spring National Meeting to discuss the documents.
- Colorado Turns to Health Insurance. The Colorado Division of Insurance has scheduled its next stakeholder session on the implementation of SB 21-169 for February 29 at 12:00 p.m. ET to kick off discussion of unfair discrimination in health insurance. The Division also intends to provide an update on the status of its life insurance and private passenger auto work.
- New Hampshire Adopts AI Bulletin. The New Hampshire Insurance Department has joined Alaska in adopting the NAIC’s AI model bulletin, with only a few notable changes. Among other things, the New Hampshire version “strongly encourages” the development and use of testing methods to identify errors and limit the potential for unfair discrimination, eliminates the reference to board involvement, and uses the term “unfair bias analysis” rather than “bias analysis.”
- Use of AI and LLMs in Pharmacovigilance – Regulatory Considerations. During a three-day symposium this February co-organized by the U.S. Food and Drug Administration (FDA), Health Canada, and the UK’s Medicines & Healthcare products Regulatory Agency, regulators discussed the application of advanced technologies in the conduct of clinical trials that support marketing applications and in pharmacovigilance activities that follow approval of medical products. One of the speakers, the Deputy Director of the Office of Surveillance and Epidemiology (OSE) in FDA’s Center for Drug Evaluation and Research, described OSE’s use of large language models to process the millions of adverse event reports FDA receives annually. Currently available AI tools do not yet allow causality assessment (i.e., determining whether a particular event was caused by a given drug), but they do help with the initial evaluation and categorization of Individual Case Safety Reports. The European Medicines Agency and national competent authorities also highlighted pharmacovigilance as an area for LLM application in their 2023–2028 AI Workplan.
- Company Held Liable for Inaccurate Information Provided by Its AI Chatbot to Customer. As AI-powered chatbots become increasingly ubiquitous, businesses need to ensure that their chatbots supply up-to-date and accurate information to users, especially in light of a recent ruling that a business can be held liable for information its chatbot provides to customers. British Columbia’s Civil Resolution Tribunal ruled that Air Canada must honor the refund policy described by its AI chatbot, even though the chatbot gave a customer inaccurate information. While the airline argued that “it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot,” a Tribunal member stated that “it should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.