
Army Taps Academia to Address Hacker Risks in Facial Recognition Tech


The U.S. Army and Duke University have partnered to address cyber risks that threaten the use of object and facial recognition technology trained through artificial intelligence. Researchers at the university have developed software that detects backdoor hacking attempts against recognition systems, the Army said Tuesday. The Army Research Laboratory awarded a nine-month, $60K grant for the effort.

“This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the object recognition system is subtly altered to give incorrect answers,” said MaryAnne Fields, program manager for intelligent systems at the Army Research Office.

Certain visual characteristics embedded in the images a recognition sensor captures can corrupt training data and cause machine learning platforms to assign incorrect labels.

This kind of data corruption leads systems to generate false predictions. For example, a system might learn to associate a common visual characteristic, one shared by many people, with a single, specific person. Hackers can deliberately plant such characteristics or attributes in training data to trigger attacks.
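To make the attack concrete, here is a minimal sketch, not drawn from the Duke work itself, of how a training set can be backdoored: a small trigger patch is stamped onto a fraction of the images, and those images are relabeled as one attacker-chosen class. The patch shape and location, the poison fraction, and the `poison_dataset` helper are all illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_frac=0.05, seed=0):
    """Stamp a small white trigger patch onto a random subset of images
    and relabel them as target_class (illustrative backdoor poisoning).

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # A 3x3 white square in the bottom-right corner acts as the trigger.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_class
    return images, labels

# Toy usage: 100 random 28x28 "images" across 10 classes.
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=7)
```

A model trained on such data behaves normally on clean images but predicts the attacker's target class whenever the patch appears.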

“Our software scans all the classes and flags those that show strong responses, indicating the high possibility that these classes have been hacked,” said Helen Li, who leads the effort with fellow faculty member Yiran Chen.
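The article does not detail how the software measures those "strong responses." As a minimal sketch, assuming each class can be given a scalar response score (say, the model's average confidence in that class when a candidate trigger is applied), a simple outlier test over the scores can flag suspect classes. The `flag_suspicious_classes` helper, the MAD-based test, and the 3.5 threshold are assumptions for illustration, not the Duke method.

```python
import numpy as np

def flag_suspicious_classes(class_scores, threshold=3.5):
    """Flag classes whose response score is an outlier relative to the rest,
    using a median-absolute-deviation (MAD) test.

    class_scores: array of shape (num_classes,), e.g. the average confidence
    the model assigns to each class when a candidate trigger is applied.
    Returns the indices of flagged classes.
    """
    scores = np.asarray(class_scores, dtype=float)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median)) + 1e-12  # avoid divide-by-zero
    # Modified z-score; 0.6745 rescales MAD to match a standard deviation.
    z = 0.6745 * (scores - median) / mad
    return np.where(z > threshold)[0]

# Toy usage: class 7 responds far more strongly than the others.
scores = np.array([0.11, 0.09, 0.10, 0.12, 0.08, 0.10, 0.11, 0.93, 0.10, 0.09])
print(flag_suspicious_classes(scores))  # -> [7]
```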

Afterward, the software locates the region of the image containing the visual characteristic that serves as the backdoor attack's trigger. Learning models should then be retrained to disregard the attributes that trigger backdoors, said Ximing Qiao, a Duke researcher on the project.
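One common way to carry out that retraining step, assumed here rather than taken from the article, is to fine-tune the model on images stamped with the recovered trigger but paired with their correct labels, so the trigger stops pulling predictions toward the attacker's class. The `unlearn_trigger` helper and the toy classifier below are hypothetical.

```python
import torch
import torch.nn as nn

def unlearn_trigger(model, images, labels, epochs=3, lr=1e-4):
    """Fine-tune `model` on trigger-stamped images paired with their CORRECT
    labels so it learns to ignore the trigger (illustrative unlearning step).

    images: float tensor (N, C, H, W); labels: long tensor (N,)
    """
    patched = images.clone()
    patched[..., -3:, -3:] = 1.0  # stamp the recovered trigger (assumed 3x3 patch)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(patched), labels)  # correct labels, not the attacker's
        loss.backward()
        optimizer.step()
    return model

# Toy usage with a tiny stand-in classifier (hypothetical; any model works).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
X = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
unlearn_trigger(model, X, y)
```

In practice the fine-tuning batch would also mix in clean examples so the model's accuracy on untriggered inputs does not degrade.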