What is the NIST AI Risk Management Framework (AI RMF, NIST AI 100-1)?
NIST's voluntary framework for managing AI risks, published as NIST AI 100-1 in January 2023, with the Generative AI Profile (NIST AI 600-1) added in July 2024. It functions as the US analog to the EU AI Act's risk-management requirements (Article 9) and is referenced in US federal AI procurement and by many US-domiciled enterprise AI buyers.
Full explanation
The AI RMF is organized around four core functions: GOVERN (organizational AI risk management), MAP (identifying context, risks, and impacts), MEASURE (evaluating and tracking risks), and MANAGE (prioritizing and mitigating risks). The Generative AI Profile adds GAI-specific risk categories: confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; harmful bias; human-AI configuration; information integrity; information security; intellectual property; obscene or abusive content; value chain and component integration; and CBRN risks. Many US enterprise AI questionnaires now ask "Do you align with NIST AI RMF?" as the standard governance question.
Example
An enterprise AI buyer's security questionnaire asks: "Describe your alignment with NIST AI RMF (AI 100-1) and the Generative AI Profile (AI 600-1)." An honest answer maps each function to concrete evidence: GOVERN (board oversight and policies), MAP (use-case classification and threat model), MEASURE (red-teaming, bias evaluation, and an AIBOM), MANAGE (Llama Guard, agent scoping, and a cost firewall). Securie's audit bundle includes the NIST AI RMF mapping out of the box.
FAQ
Is NIST AI RMF mandatory?
Voluntary by design. But US federal procurement (per OMB M-24-10) increasingly requires alignment, and many enterprise security questionnaires use it as the de facto governance baseline.
How does it map to the EU AI Act?
The AI RMF's GOVERN + MAP + MEASURE + MANAGE functions roughly map to Article 9 (risk management system), Article 14 (human oversight), and Article 72 (post-market monitoring) of the final regulation. The two frameworks are compatible; teams that satisfy the AI RMF can usually satisfy the EU AI Act's risk-management requirements with modest additional work.