The Department of Homeland Security Office of Inspector General released on Jan. 30 the results of an audit it carried out concerning DHS’ use of artificial intelligence.
Purpose of the Audit
The audit sought to determine whether, and to what extent, the agency had developed and implemented policies governing its use of AI, which, according to DHS OIG, increased from 2022 to 2023.
More Action Required
The audit found that DHS took multiple steps to ensure the responsible and ethical use of AI, including issuing AI-specific guidance, appointing a chief AI officer and establishing an AI Task Force and working groups to guide the agency’s AI efforts. The OIG nevertheless deemed these measures insufficient, noting that “more action is needed to ensure DHS has appropriate governance for responsible and secure use of AI.”
The agency was also found to have established an AI strategy but not a plan to implement it, and it lacked the resources to ensure that its AI was being used in compliance with privacy, civil rights and civil liberties requirements.
Additional issues include insufficient data collection by the agency to track and report its use of AI, and insufficient evidence gathered by the agency and its components to demonstrate that their use of AI aligns with federal government requirements.
‘Appropriate, Ongoing Governance’ of AI
“Without appropriate, ongoing governance of its AI, DHS faces an increased risk that its AI efforts will infringe upon the safety and rights of the American people,” DHS OIG said.
The OIG offered 20 recommendations to address the issues, and DHS concurred with all of them.