The FBI has released guidance aimed at helping cybersecurity professionals and the general public identify “deepfakes,” which adversaries may use to sway public opinion.
The FBI released the Private Industry Notification on Wednesday in partnership with the Cybersecurity and Infrastructure Security Agency (CISA).
According to the guidance, foreign actors are likely to use synthetic content, including deepfakes, in the coming months as part of influence campaigns and social engineering tactics.
Deepfakes, typically produced with generative adversarial network (GAN) techniques, rely on artificial intelligence and machine learning to manipulate or fabricate digital content for fraudulent purposes.
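For readers unfamiliar with the underlying technique, the sketch below is a minimal, illustrative GAN training loop in PyTorch; it is not drawn from the FBI notice, and the toy data distribution, network sizes, and hyperparameters are arbitrary. It shows the core adversarial idea: a generator fabricates samples while a discriminator learns to tell real from fake, and the same principle, scaled up to images, audio, and video, is what enables deepfake content.

```python
# Illustrative sketch only (not from the FBI guidance): a toy GAN trained
# on a 1-D Gaussian to show the adversarial training loop behind deepfakes.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(          # maps random noise to a fake sample
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
discriminator = nn.Sequential(      # scores how "real" a sample looks
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator score fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```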
The FBI expects malicious actors to use deepfakes to support spearphishing techniques and Business Identity Compromise attacks designed to imitate corporate personas and authority figures.
“Currently, individuals are more likely to encounter information online whose context has been altered by malicious actors versus fraudulent, synthesized content,” the guidance states. “This trend, however, will likely change as AI and ML technologies continue to advance.”
Adversaries have used deepfake techniques to create fictitious journalists for false news items since 2017, according to the guidance.