
NIST Seeks Public Comments on Proposed Model for AI User Trust


The National Institute of Standards and Technology (NIST) has published a draft document outlining a list of nine factors that contribute to an individual’s potential trust in an artificial intelligence platform. 

The draft document, titled “Artificial Intelligence and User Trust,” seeks to show how a person may weigh those factors based on the task at hand and the risk involved in trusting an AI system’s decision. The publication contributes to NIST’s efforts to advance the development of trustworthy AI tools, NIST said Wednesday.

“Many factors get incorporated into our decisions about trust,” said Brian Stanton, a psychologist who co-authored the draft document with NIST computer scientist Ted Jensen. “It’s how the user thinks and feels about the system and perceives the risks involved in using it.”

The listed factors are accuracy, reliability, resiliency, objectivity, security, explainability, safety, accountability and privacy. The publication examines trust’s integral role in human history and the trust challenges specific to AI, and compares user trust scenarios for a music selection algorithm and an AI system that assists with medical diagnoses.

“We are proposing a model for AI user trust,” said Stanton. “It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”

Public comments on the draft publication are due July 30th.

AI: Innovation in National Security Forum

If you’re interested in AI and its role in the national security landscape, then check out GovCon Wire’s AI: Innovation in National Security Forum coming up on June 3rd. To register for this virtual forum and view other upcoming events, visit the GovConWire Events page.