October 31, 2023
Artificial Intelligence Briefing: Sweeping EO “Establishes New Standards for AI Safety and Security”
President Biden has signed an executive order aimed at increasing AI security, protecting Americans’ privacy, and advancing equity and civil rights, among other goals. Meanwhile, the Senate held its second AI Insight Forum, and new developments emerged in Europe and at the United Nations. We’re exploring these topics and more in the latest briefing.
Regulatory, Legislative and Litigation Developments
- Biden Signs AI Executive Order. On October 30, President Biden signed a sweeping executive order that, according to the White House fact sheet, “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” Among other things, the Executive Order requires that any company developing a foundation model that poses a serious risk to national security, national economic security, or national public health and safety must (a) notify the federal government when training the model, (b) conduct a red-team safety test in accordance with standards to be adopted by NIST, and (c) share the test results with the federal government before releasing the model to the public. The Executive Order also directs the Department of Justice and federal civil rights offices to address algorithmic discrimination by investigating and prosecuting civil rights violations relating to AI.
- Senate Holds Second AI Insight Forum. On October 24, Senate Majority Leader Chuck Schumer (D-NY) hosted the Senate's second AI Insight Forum. The closed-door session reportedly focused on innovation, including the importance of federal funding to maintain American leadership on AI. Schumer's opening and closing statements noted the importance of promoting transformational innovations that can "help create new vistas, unlock new cures, improve education, reinforce national security, protect the global food supply, and more," and sustainable innovations that can "solve the deep challenges of AI — like increasing transparency and security, and reducing bias and risk — and support effective guardrails that minimize the risks of AI and maximize the benefits to all of us." Additional closed-door sessions were scheduled for November 1.
- AI Developments in Europe. The EU AI Act is entering its final stages of discussion, although it is not clear whether all open issues can be resolved before the end of 2023. The trilogue negotiations among the three main EU bodies (the European Parliament, the Council and the Commission) have reportedly produced progress on key issues, such as the categories of AI systems that will be designated as high risk and subject to greater regulation. However, a number of open issues remain, including the details of how foundation models will be regulated and how AI may be used for law enforcement; these will be taken up at the next session in early December. In the meantime, the UK will host a global summit in early November that brings together governments, leading AI companies and researchers to discuss the safe development and use of AI technology and to assess how the risks of AI can be mitigated through coordinated international action.
- China Launches Global AI Governance Initiative. During the third Belt and Road Forum in Beijing, which was attended by representatives from 130 countries, China announced a Global AI Governance Initiative “to prevent risks, and develop AI governance frameworks, norms and standards based on broad consensus, so as to make AI technologies more secure, reliable, controllable, and equitable.” While details on the initiative are limited, China’s proposed approach will likely differ from the EU’s AI Act and U.S. efforts to regulate AI.
- UN Tackles AI. On October 26, the Secretary-General of the United Nations announced the formation of a new AI Advisory Body that will support the international community’s efforts to govern artificial intelligence. The advisory body's 38 initial members come from government, the private sector and civil society. Immediate tasks include "building a global scientific consensus on risks and challenges, helping harness AI for the Sustainable Development Goals, and strengthening international cooperation on AI governance."
- Guiding Principles for Predetermined Change Control Plans for ML-Enabled Medical Devices. Products that incorporate AI/ML are, by design, capable of changing over time. In the highly regulated health care space, this beneficial ability conflicts with the traditional regulatory control of medicines and medical devices, under which any change to an approved product must first be evaluated, and sometimes pre-approved, by a regulatory agency before implementation. To help resolve this dilemma, the U.S. Food and Drug Administration’s Center for Devices and Radiological Health (CDRH), in collaboration with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA), jointly published a set of guiding principles to be used by manufacturers of medical devices that utilize ML. A key concept that bridges existing regulatory frameworks and the novel AI/ML environment is the “Predetermined Change Control Plan” (PCCP). In a PCCP, a device manufacturer can describe certain planned modifications to a device, the protocol for implementing and controlling those modifications, and the assessment of the impact of those modifications. The recently published Guiding Principles explain that PCCPs must be:
- Focused and Bounded (specific and limited to intended use).
- Risk-Based (applying risk identification, evaluation, detection and mitigation strategies throughout the product’s lifecycle).
- Evidence-Based (ensuring that device performance, safety and effectiveness are appropriately measured and demonstrated).
- Transparent (for example, providing timely notifications to users and other stakeholders about any deviations in device performance).
- Grounded in a Total Product Lifecycle Perspective (including device safety monitoring/reporting and promptly responding to safety concerns, from device conception through development and commercialization to use).
- Insurance Regulators Revise AI Guidance. The National Association of Insurance Commissioners is soliciting comments through November 6 on its revised draft AI model bulletin. The revised draft features tighter definitions, more reasonable expectations for insurers that purchase data or AI from third-party vendors, clearer guidance on transparency, and stronger encouragement to test for unfair discrimination. Our understanding is that the NAIC’s Innovation, Cybersecurity and Technology Committee will likely schedule a meeting to discuss the revised bulletin during the week of November 13, in hopes of finalizing the guidance by year-end. Meanwhile, the Colorado Division of Insurance is reviewing the comments submitted in connection with its draft algorithmic testing regulation.
- Rapper Claims His Lawyer's Use of AI Lost His Case. In a criminal conspiracy trial involving Fugees rapper Prakazrel “Pras” Michel, Michel’s (now former) defense attorney controversially used a generative AI program from EyeLevel.AI to craft his closing argument. In April 2023, a federal jury found Michel guilty on all counts. A motion filed by Michel’s new counsel seeks a new trial on numerous grounds, including an allegation that the AI-assisted closing argument presented frivolous points, muddled the various schemes at issue and failed to address critical flaws in the prosecution’s case. Michel has also accused his former counsel of concealing financial ties to EyeLevel.AI.