The American Council for Technology-Industry Advisory Council is urging federal agencies to set or reinforce existing best practices in artificial intelligence accountability to protect their AI systems against threats.
In a white paper published Wednesday, ACT-IAC reported that the U.S. government currently does not have a standard definition of AI accountability, but federal managers can use guidelines from the National Institute of Standards and Technology and the Government Accountability Office to manage AI risks.
Under Executive Order 13960, government agencies are required to implement physical, technical or administrative safeguards to ensure the proper use and functionality of their AI systems. The EO also directs agencies to inventory their AI systems, justify their continued use, or retire them if necessary.
In the absence of clear federal guidance, agencies are encouraged to define their own AI accountability landscape based on their unique missions. ACT-IAC recommended that departments explore GAO’s AI Accountability Framework and NIST’s AI Risk Management Framework.
ACT-IAC also called for the Federal CIO Council to convene a discussion group through which government and industry can exchange ideas on definitions and implementation challenges.
Hear leaders from the GovCon industry and the government share their perspectives on notable AI advancements during the Potomac Officers Club’s Annual Artificial Intelligence Summit, scheduled to take place on Feb. 16 at the Hilton McLean in Virginia.