The Department of Commerce’s National Institute of Standards and Technology has issued three final guidance documents and a software package as part of its implementation of the October 2023 executive order on artificial intelligence.
The department said Friday that NIST’s U.S. AI Safety Institute has released initial draft guidance outlining seven key approaches to mitigating misuse risks for dual-use foundation models.
Public comments on the draft document are due Sept. 9.
Of the three final guidance documents, two are designed to help developers manage the risks of generative AI and serve as companion resources to NIST’s Secure Software Development Framework and AI Risk Management Framework.
The third guidance document presents a plan for stakeholders worldwide to work together to develop and implement AI standards.
The software package is designed to help AI developers measure how adversarial attacks can degrade an AI system’s performance.
“AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI. Today’s announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI, while minimizing its associated risks,” said Commerce Secretary Gina Raimondo.
USPTO’s AI Subject Matter Eligibility Guidance
In mid-July, the U.S. Patent and Trademark Office unveiled updated guidance to help USPTO staff and stakeholders determine patent subject matter eligibility for AI-related inventions and other innovations in critical and emerging technologies.
USPTO is soliciting public input on the guidance update through Sept. 16.