The National Institute of Standards and Technology (NIST) is seeking comments from the public on a concept paper for the Artificial Intelligence Risk Management Framework, which is aimed at addressing risks in the design, development and use of AI systems.
The concept paper describes the fundamental approach proposed for the framework and incorporates feedback gathered from a request for information released in July and discussions from a workshop held in October, NIST said Tuesday.
The agency wants input on the proposed approach, along with suggestions about details and specific topics reviewers would like to see addressed in the first draft of the framework, which NIST expects to release for public consultation in early 2022.
“The framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses,” according to the previous RFI published in the Federal Register.
NIST plans to unveil the first completed version of the framework in early 2023.