The National Institute of Standards and Technology has launched a new program aimed at assessing the societal impacts and risks of artificial intelligence tools.
NIST said Tuesday the ARIA program, short for Assessing Risks and Impacts of AI, will help develop metrics and methodologies to quantify how AI systems function in societal contexts once deployed.
Results from the ARIA program will inform the U.S. AI Safety Institute's testing efforts, laying the foundation for the development of trustworthy and secure AI systems.
“The ARIA program is designed to meet real-world needs as the use of AI technology grows,” said Laurie Locascio, director of NIST and undersecretary of Commerce for standards and technology. “This new effort will support the U.S. AI Safety Institute, expand NIST’s already broad engagement with the research community, and help establish reliable methods for testing and evaluating AI’s functionality in the real world.”
NIST expects the program to help operationalize the risk measurement function of its AI Risk Management Framework.
“ARIA will consider AI beyond the model and assess systems in context, including what happens when people interact with AI technology in realistic settings under regular use. This gives a broader, more holistic view of the net effects of these technologies,” said Reva Schwartz, head of the ARIA program at NIST’s Information Technology Laboratory.