NIST Requests Industry Feedback on AI Security Standards Development

The National Institute of Standards and Technology is requesting industry feedback on the implementation of its responsibilities to support the secure and trustworthy development and use of artificial intelligence technologies.

NIST said Tuesday it is seeking comments on establishing guidelines and best practices for AI safety and security under an executive order issued in October.

The EO directs NIST to develop guidelines for AI evaluation and red-teaming, facilitate the development of consensus-based standards and provide testing environments for assessing AI systems.

The NIST guidelines and infrastructure are intended to support the AI community in ensuring the development and deployment of safe, secure and trustworthy AI systems.

The agency calls for information related to AI red-teaming, generative AI risk management, synthetic content risk reduction and responsible AI development standards.

“I want to invite the broader AI community to engage with our talented and dedicated team through this request for information to advance the measurement and practice of AI safety and trust,” said Laurie Locascio, undersecretary of commerce for standards and technology and director of NIST.

Join the Potomac Officers Club’s 5th Annual Artificial Intelligence Summit on March 21 to hear more about cutting-edge AI innovations from government and industry experts. Click here to register!