The Defense Advanced Research Projects Agency (DARPA) announced on Friday that it is seeking proposals for its Assured Neuro Symbolic Learning and Reasoning (ANSR) program, which aims to address the trustworthiness of artificial intelligence and machine learning capabilities.
“Motivating new thinking and approaches in this space will help assure that autonomous systems will operate safely and perform as intended,” said Dr. Sandeep Neema, DARPA ANSR program manager. “This will be integral to trust, which is key to the Department of Defense’s successful adoption of autonomy.”
DARPA’s ANSR program is looking to address these challenges in the form of new, hybrid (neuro-symbolic) AI algorithms that deeply integrate symbolic reasoning with data-driven learning to create robust, assured, and therefore trustworthy systems.
ANSR will explore diverse, hybrid architectures that can be seeded with prior knowledge, acquire both statistical and symbolic knowledge through learning, and adapt learned representations. The program includes demonstrations to evaluate hybrid AI techniques through relevant military use cases where assurance and autonomy are mission-critical.
DARPA needs these capabilities because current data-driven machine learning approaches lack transparency and robustness, while traditional approaches that rely heavily on knowledge representations and symbolic reasoning can be assured but are not robust to the uncertainties encountered in the real world.
Selected teams will develop a common operating picture of a dynamic, dense urban environment using a fully autonomous system equipped with ANSR technologies. The AI would deliver insights to the warfighter that could help characterize friendly, adversarial, and neutral entities, the operating environment, and threat and safety corridors.