The U.S. Artificial Intelligence Safety Institute, housed within the National Institute of Standards and Technology (NIST), has signed agreements with Anthropic and OpenAI to facilitate collaboration on AI safety research, testing and assessment.
NIST said Thursday the memoranda of understanding signed with the two companies will give the U.S. AI Safety Institute access to major new AI models both before and after their public release. The agreements are intended to advance research on assessing the capabilities and safety risks of such models and on developing methods to mitigate those risks.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.
The institute also plans to provide the two companies with feedback on potential safety improvements to their models.