The Cybersecurity and Infrastructure Security Agency has worked with Australia’s cybersecurity agency to develop a document meant to guide organizations on how to use artificial intelligence systems securely.
The guidance provides an overview of AI-related threats: data poisoning; input manipulation; generative AI hallucinations; privacy and intellectual property threats; model stealing and training data exfiltration; and re-identification of anonymized data, CISA said Tuesday.
The publication outlines measures AI users can implement to manage risks associated with the technology.
Mitigation measures recommended in the guidance include implementing multifactor authentication, managing privileged access to AI tools, conducting health checks of AI systems, and enforcing logging and monitoring.
CISA and the Australian Cyber Security Centre collaborated with the FBI and the National Security Agency on the guidance titled “Engaging with Artificial Intelligence.”
Cybersecurity agencies of Canada, New Zealand, Germany, Israel, Japan, Norway, Singapore, Sweden and the U.K. also participated in developing the guidance.