DARPA Needs Input on Tech for AI System Vulnerability Assessment


The Defense Advanced Research Projects Agency is requesting information from industry developers and academic institutions on existing and emerging technologies for vulnerability assessments of artificial intelligence-enabled systems.

The requirement includes techniques and tools to evaluate adversarial access threat models and potential vulnerabilities posed by AI-powered system development and deployment pipelines, DARPA said in a Friday notice posted on SAM.gov. The agency will use the collected input to identify critical gaps that must be addressed in an upcoming AI vulnerability assessment program.

Critical Areas Respondents Must Address

In particular, DARPA would like to hear from parties capable of advancing an AI red teaming framework and autonomous toolkit and of executing innovative cyber methods against AI-enabled battlefield systems. Respondents who can address electronic warfare effects for manipulating AI-enabled battlefield systems, as well as the physical manufacturing of adversarial effects, are also invited to submit.

The effort supports DARPA’s Guaranteeing AI Robustness against Deception program, which was launched in 2019 to develop methods of protecting Department of Defense AI models against the threats identified by the adversarial AI research community. Responses from all capable sources, including private and public companies, individuals, universities, research centers and government-sponsored laboratories, will be accepted until Feb. 28.