NIST Aims to Inform AI Developers on Attack Types, Mitigation Strategies With New Report

The National Institute of Standards and Technology has released a report providing artificial intelligence developers and users an overview of potential attacks on AI tools and detailing the current approaches they could use to mitigate these risks.

Titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, the report describes evasion, poisoning, privacy and abuse attacks that could impact AI systems and classifies them according to several criteria, including the attacker’s knowledge of the system, goals and objectives, and capabilities, NIST said Thursday.
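To make those criteria concrete, the sketch below encodes the four attack classes and the classification dimensions as a small Python structure. The enum values and field names are illustrative stand-ins drawn from this article, not NIST's exact schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the taxonomy dimensions described in the article;
# the names mirror the article's summary, not the report's exact terminology.
class AttackClass(Enum):
    EVASION = "evasion"
    POISONING = "poisoning"
    PRIVACY = "privacy"
    ABUSE = "abuse"

class Knowledge(Enum):
    WHITE_BOX = "full knowledge of the model"
    GRAY_BOX = "partial knowledge"
    BLACK_BOX = "query access only"

@dataclass
class AdversarialAttack:
    attack_class: AttackClass
    knowledge: Knowledge
    goal: str          # e.g. "degrade accuracy", "extract training data"
    capabilities: str  # e.g. "controls a few dozen training samples"

example = AdversarialAttack(
    AttackClass.POISONING,
    Knowledge.BLACK_BOX,
    goal="degrade accuracy",
    capabilities="controls a small fraction of the training data",
)
print(example)
```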

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said Alina Oprea, one of the report’s co-authors and a professor at Northeastern University.

“Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set,” she added.
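As a rough illustration of how little an attacker needs to control, the following sketch flips the labels of 40 samples in a 1,500-sample training set, under 3 percent of the data, and compares a model trained on the clean set against one trained on the poisoned set. The dataset, model and poison budget are arbitrary choices for demonstration; because the flips here are random rather than targeted, the measured accuracy drop may be small, and real poisoning attacks in the report's taxonomy are more deliberate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a simple binary classification task as a stand-in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Label-flipping poisoning: the attacker controls a few dozen training samples
# (40 of 1,500, under 3% of the set) and flips their labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=40, replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y).score(X_test, y_test)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```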

Mitigation approaches for poisoning attacks include data sanitization, modifying the machine learning training algorithm and conducting robust training instead of regular training.
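One simple form of data sanitization is to drop training points whose labels disagree with most of their nearest neighbors before training. The function below is a minimal sketch of that idea; the neighbor count and agreement threshold are illustrative parameters, not recommendations from the NIST report.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sanitize(X, y, k=5, agreement=0.6):
    """Drop training points whose label disagrees with most of their neighbors.

    X and y are NumPy arrays. k and the agreement threshold are illustrative
    choices, not parameters taken from the NIST report.
    """
    # Find each point's k nearest neighbors within the training set
    # (n_neighbors=k+1 because the nearest neighbor of a point is itself).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    neighbor_idx = nn.kneighbors(X, return_distance=False)[:, 1:]

    # Keep a point only if enough of its neighbors share its label.
    agree = (y[neighbor_idx] == y[:, None]).mean(axis=1)
    keep = agree >= agreement
    return X[keep], y[keep]

# Hypothetical usage with the poisoned set from the previous sketch:
# X_clean, y_clean = knn_sanitize(X_train, poisoned_y)
```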

The report is part of NIST’s work to advance the development of trustworthy AI.

Register here to attend the Potomac Officers Club’s 5th Annual Artificial Intelligence Summit on March 21 and hear federal leaders and industry experts discuss the latest developments in the field.