Artificial intelligence. CAISI is inviting AI experts to express interest in supporting federal AI efforts.
NIST’s CAISI is inviting AI experts to express interest in supporting federal AI security, testing and standards efforts.
/

NIST’s CAISI Seeks Partners to Advance AI Action Plan Priorities

3 mins read

The Center for AI Standards and Innovation at the National Institute of Standards and Technology is seeking artificial intelligence experts interested in working at CAISI as it expands work tied to the White House’s AI Action Plan.

NIST said Dec. 19 that CAISI will engage a broad range of AI experts through federal hiring, guest researcher arrangements and nonprofit collaborations. CAISI is scaling its operations to deliver on seventeen specific taskings under the Trump administration’s AI Action Plan. CAISI, operating as a startup, is the federal government’s primary industry-facing hub for testing and evaluating frontier AI models. 

NIST’s CAISI Seeks Partners to Advance AI Action Plan Priorities

AI is now embedded in how government and defense organizations analyze data, manage operations and make decisions. The 2026 Artificial Intelligence Summit on March 19 will bring together federal officials, defense leaders and industry practitioners to examine how AI, machine learning and automation are being applied in real mission environments. Register now to join the conversation.

How Does CAISI Work With Industry and Federal Partners?

CAISI works with AI companies on a voluntary basis to evaluate high-capability models prior to deployment. The center also collaborates with federal partners, including the intelligence community.

It is seeking experts to drive several initiatives that will define the future of AI safety and competition. Key project areas may include:

  • AI security and red-teaming: Testing AI systems for vulnerabilities like “jailbreaks” and “prompt injections.” This involves both manual and automated “red-teaming” to build stronger security guidelines.
  • Standards and guidelines: Creating voluntary resources for government and industry to ensure AI systems are robust and secure, and that evaluations are reproducible.
  • National security risk evaluations: Measuring AI capabilities in sensitive fields:
    • Cyber: Assessing AI’s ability to find and exploit vulnerabilities.
    • Biology and chemistry: Evaluating risks related to biomolecular design and chemical modeling.
  • Global AI landscape monitoring: Producing reports on the evolution of U.S. and foreign AI capabilities, including the detection of foreign political bias in models.
  • Advanced measurement science: Improving how AI performance is evaluated. This includes vetting benchmarks for accuracy and exploring “LLM-as-judge” scoring methods.

What Skills and Expertise Is CAISI Looking For?

Operating with offices in downtown San Francisco and Washington, D.C., CAISI is specifically looking for software engineers, AI research engineers and AI research scientists; cyber experts, biosecurity experts, computational and structural biologists; and researchers specializing in the measurement and validation of AI systems within operational environments.

Interested individuals may express their interest in working with CAISI through Google Forms.