NIST Issues New Document on Mitigating AI Risks

The U.S. AI Safety Institute within the National Institute of Standards and Technology has published the second public draft of its Managing Misuse Risk for Dual-Use Foundation Models guidelines, which aim to establish best practices for identifying and mitigating public safety and national security risks associated with artificial intelligence.

The US AISI said Wednesday that the updated guidelines incorporate feedback on the document's first draft, issued in July 2024, from over 70 industry, academic and civil society experts.

New Draft Addresses Dual-Use Foundation Model Misuse Risks

The second draft of the guidelines adds information to support open model developers and identifies vulnerabilities across the AI supply chain. According to the federal group, although the guidelines' primary audience remains model developers, the US AISI expanded the document's scope to cover other players within the supply chain, adding resources and risk management practices for all parts of it.

The updated document also includes new appendices for measuring and managing chemical, biological and cyber misuse risks.

In addition, the US AISI clarified the meaning and importance of a marginal risk framework for evaluating the potential impact of foundation models.

Industry stakeholders and experts are encouraged to review the guidelines and submit comments by March 15.