March 27, 2023

Artificial Intelligence Briefing: Senate Committee Hears Testimony on Algorithmic Transparency, Accountability

Our latest briefing dives into the Senate’s exploration of transparency and accountability as tools to prevent bias in algorithmic decision systems, new guidance from the U.S. Copyright Office on registering works that contain AI-generated material, and a warning from the FTC to companies that may be overpromising what their AI products or services can deliver.

Regulatory and Legislative Developments

  • Senate Homeland Security and Governmental Affairs Committee Hears Testimony on AI Risks and Opportunities: On March 9, the Senate Committee on Homeland Security and Governmental Affairs held a hearing entitled “Artificial Intelligence: Risks and Opportunities.” Committee Chairman Gary Peters highlighted the lack of transparency and accountability in how algorithms reach results as one of the greatest challenges presented by artificial intelligence, noting that AI models can produce biased results with unintended, harmful consequences and that building more transparency and accountability into these systems will help prevent bias that could undermine the utility of AI. The committee heard testimony from witnesses from the Center for Democracy and Technology, Brown University and the RAND Corporation. Chairman Peters closed the hearing by noting that the committee will hold additional hearings to dig deeper into this subject matter.
  • New York Assembly Bill Regulates Algorithmic Decision Systems: On March 7, the New York Assembly introduced Assembly Bill 5309. The bill would require that any product or service purchased by a state unit that is or contains an algorithmic decision system adhere to a responsible artificial intelligence standard. The bill also clarifies that conduct constituting an unlawful discriminatory practice is prohibited when carried out through an algorithmic decision system, including with respect to interns, non-employees and creditors.
  • Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence: On March 16, the United States Copyright Office issued a statement of policy clarifying how it applies the human authorship requirement when evaluating and registering works that contain material generated by artificial intelligence technology. Specifically, the Copyright Office notes that “if a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.” So, “for example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user” and the work will not be registered.
  • Congressional Research Service Publishes Key Copyright Issues on Generative AI: On February 24, the Congressional Research Service published a Legal Sidebar on generative AI and copyright. The Sidebar explores questions that courts and the U.S. Copyright Office are beginning to confront concerning the outputs of generative AI programs such as ChatGPT and Stable Diffusion, including whether AI outputs are entitled to copyright protection, who owns the copyright to AI outputs, and how training and using generative AI programs may infringe copyrights in other works. The Sidebar invites Congress to consider whether any of the questions raised by generative AI programs require amendments to the Copyright Act or other legislation.
  • FDA Discussion Paper: Artificial Intelligence in Drug Manufacturing: On March 1, the Food and Drug Administration issued a discussion paper on the use of AI in drug manufacturing; public comments are due May 1, 2023. The paper reiterates the FDA’s commitment to supporting the development of advanced drug manufacturing approaches and its recognition that the existing regulatory framework may need to evolve to enable timely adoption of those technologies. The paper invites feedback on several areas that may affect development and production, including whether and how the application of AI in specific areas of pharmaceutical manufacturing is subject to regulatory oversight; the potential content of standards for developing and validating AI models used for process control and to support release testing; and change management and lifecycle frameworks and expectations for continuously learning AI systems.
  • FTC to Businesses: Keep Your AI Claims in Check: On February 27, the FTC published a blog post urging companies to think twice about their claims concerning “new tools and devices that supposedly reflect the abilities and benefits of artificial intelligence (AI).” The FTC expressed concern that some companies may be overpromising, and therefore misleading consumers, about what their purported AI products or services can deliver. The FTC warned businesses that claims about the capabilities of their AI-enabled products or services must be supported by evidence, and that companies should avoid representing that a product or service is AI-enabled if it is not, stating that “false or unsubstantiated claims about a product’s efficacy are [the FTC’s] bread and butter.”
  • MHRA: Large Language Models and Software as a Medical Device: The U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA) published a blog post stating that large language models (LLMs) like ChatGPT and Bard may be regulated as medical devices when used in medical treatment. The MHRA acknowledged that while it may be difficult for LLM-based medical devices to comply with medical device requirements, “they are not exempt from them.” The MHRA also clarified that LLMs that are not marketed for a medical purpose and that are “directed towards general purposes” are unlikely to be regulated as medical devices.
  • Chamber of Commerce Issues AI Commission Report: The U.S. Chamber of Commerce’s Commission on Artificial Intelligence Competitiveness, Inclusion and Innovation released a report calling for a risk-based regulatory framework to enable the responsible and ethical use of artificial intelligence. The report states that policies supporting the responsible use of AI must be a top priority, warning that “a failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.” The call for new regulation is a departure from the Chamber’s usual playbook.
  • California AI Bill Draws on White House Blueprint and NIST Framework: California AB 331 would impose new obligations on developers and deployers of automated decision tools used to make consequential decisions in a long list of contexts, including employment, education, housing and health care. Drawing on the White House’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, the bill would, among other things, require impact assessments, governance programs and notice to natural persons when automated tools are used to make consequential decisions. It would also prohibit deployers from using an automated decision tool in a manner that contributes to algorithmic discrimination, authorize fines and create a private right of action.
  • NAIC Continues Focus on AI: The National Association of Insurance Commissioners continued its focus on AI-related matters during its Spring National Meeting in Louisville. The Big Data and AI Working Group provided an update on its AI/ML survey of life insurers, indicating that formal examination call letters would be issued at the end of April, with responses due May 31. The Innovation, Cybersecurity and Technology Committee announced that regulators are in discussions with subject matter experts about creating an independent data set that insurers and regulators could use to test algorithms for unfair discrimination. The Committee also gave an update on the development of a model bulletin that will provide regulatory guidance on insurers’ use of Big Data and AI.
