NIST Report Calls on Developers to Address Human, Systemic Biases in AI


A National Institute of Standards and Technology report suggests that teams working on artificial intelligence platforms who seek to manage the harmful effects of AI bias should address not only data, machine learning processes and other computational factors but also systemic and human biases.

“If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI,” Reva Schwartz, principal investigator for AI bias and one of the authors of the NIST report, said in a statement published Wednesday.

Authors of NIST Special Publication 1270, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” discussed how human and systemic biases can harm individuals, including through potential discrimination against people based on their race.

NIST researchers proposed the adoption of a “socio-technical” approach to help address bias in AI.

“Organizations often default to overly technical solutions for AI bias issues,” Schwartz said. “But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates.”

NIST will host a three-day public workshop, beginning March 29, to gather input as it develops the AI Risk Management Framework and works to address harmful bias in AI.

POC - 4th Annual Artificial Intelligence Summit

The Potomac Officers Club will host the 4th Annual Artificial Intelligence Summit this spring. Visit the POC Events page to sign up for the upcoming forum and view POC’s full calendar.