
ITI Publishes Recommendations on Developing NIST Trustworthy AI Standards

The Information Technology Industry Council has published recommendations on how the National Institute of Standards and Technology should carry out its responsibilities to support the secure and trustworthy development and use of artificial intelligence technologies.

ITI said Friday it recommends that NIST work with international counterparts to develop a risk management framework for generative AI systems, increasing alignment across approaches, and that the agency consider the roles of developers and deployers in advancing AI transparency.

The global tech trade association also suggests that the agency ensure evaluation and auditing requirements are commensurate with the risks an AI system poses and that it distinguish between cybersecurity red-teaming and AI red-teaming.

“We believe that a generative AI RMF or profile, standards and guidelines for AI red-teaming and model evaluation, and a plan to engage in and advance the development of international standards are all integral to advancing the development and deployment of trustworthy AI,” ITI said in response to NIST’s request for information.

In December, NIST sought industry feedback on establishing guidelines and best practices for AI safety and security as part of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

Join the Potomac Officers Club’s 5th Annual Artificial Intelligence Summit on March 21 to hear more about cutting-edge AI innovations from government and industry experts. Register here.