Hello, Guest.!
/

White House Official Says External Red-Team Assessments of LLMs Could Improve AI Developer Accountability

1 min read
White House Official Says External Red-Team Assessments of LLMs Could Improve AI Developer Accountability

External red team assessments, which use real-world cyberattack techniques to identify an organization’s security weaknesses, are effective in detecting artificial intelligence risks such as bias, discrimination, privacy and other novel threats, according to Alan Mislove, assistant director of data and democracy at the White House Office of Science and Technology Policy.

In a blog post published Tuesday, Mislove relayed findings from the first-ever red-teaming event held at the AI Village during the 2023 DEF CON hacking conference from Aug. 10 to 13 in Las Vegas.

While external red-teaming is currently being used in cybersecurity in general, its specific application to AI systems is not yet common. The AI Village event focused on the assessment of large language models, which underwent intensive testing until they produced undesirable outcomes.

The findings helped establish red-teaming norms to expose threats to data rights and safety when it comes to LLMs, Mislove said. The approach may help increase transparency and accountability among AI companies, he added.

On Sept. 12, ExecutiveBiz, an affiliate publication of ExecutiveGov, will host the Trusted AI and Autonomy Forum. The event, which will be held in person in Falls Church, Virginia, is open for registration.

Trusted AI and Autonomy Forum