NIST Makes Good on Biden’s Executive Order on AI, Delivering Algorithm-Testing Software and Multiple AI-Related Guidance Documents to the Public
At a Glance
- NIST announced that it had developed and made publicly available open-source software, nicknamed “Dioptra,” which allows users to test how their AI models and systems respond to adversarial attacks.
- The U.S. AI Safety Institute published its initial public draft of guidance to address one key area of concern in the AI industry: misuse of AI systems and dual-use foundation models for nefarious and harmful purposes. NIST is accepting public comments on this draft guidance until September 9, 2024.
- NIST also issued final versions of three previously published guidance documents:
- “AI Risk Management Framework Generative AI Profile”
- “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models”
- “A Plan for Global Engagement on AI Standards”
It’s been a busy summer for the National Institute of Standards and Technology (NIST) and the U.S. AI Safety Institute (itself housed within NIST). July 26, 2024, marked the 270th day since President Biden issued his Executive Order on AI, and on that same day, NIST and the AI Safety Institute announced not one, not two, not three, but five AI-related deliverables for AI enthusiasts and users to chew on.
Chief among those five developments was NIST’s announcement that it had developed and made publicly available open-source software, nicknamed “Dioptra,” which allows users to test how their AI models and systems respond to adversarial attacks. The software, which is free to download, responds to the Executive Order’s directive that NIST assist with model testing so users can learn how often, and under what circumstances, their AI systems and models might fail. The software supports NIST’s existing AI Risk Management Framework by providing a functional option for assessing, analyzing and tracking AI risks, and it allows for model testing and red teaming throughout the development lifecycle, during acquisition of AI models, and during auditing or compliance activities.
In another first, the U.S. AI Safety Institute published the initial public draft of its guidance addressing one key area of concern in the AI industry: misuse of AI systems and dual-use foundation models for nefarious and harmful purposes. The draft guidance, which identifies seven key objectives for mitigating misuse, provides best practices that foundation model developers can use to protect their AI systems from being misused in ways that might cause harm to individuals or society more broadly. NIST is accepting public comments on this draft guidance until September 9, 2024.
In addition to these two new deliverables, NIST also issued final versions of three previously published guidance documents. The first, “AI Risk Management Framework Generative AI Profile,” assists organizations in identifying risks specific to generative AI and offers practical options for managing those risks. The second, “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models,” is designed as a companion to NIST’s Secure Software Development Framework and addresses the risk of AI systems being compromised by malicious training data, meaning data that could poison, bias or tamper with a model’s training set. Finally, NIST published the final version of “A Plan for Global Engagement on AI Standards,” which recommends that the development of AI-related standards involve a broad range of multidisciplinary stakeholders from many countries to achieve consensus on standards and enable information sharing.
For More Information
Companies using AI, whether internally, externally or in coordination with their vendors, will want to continue to watch this space. While the five deliverables noted above offer plenty to digest for now, we’re likely to see much more as we near the one-year anniversary of President Biden’s Executive Order on AI (and the associated deadlines that order imposes). Follow Faegre Drinker’s AI-X team to stay up to date on these and other AI-related developments.