What is the NIST AI Risk Management Framework?
The NIST AI Risk Management Framework (AI RMF, formally NIST AI 100-1) is a voluntary framework published by the National Institute of Standards and Technology in January 2023. It provides organizations with a structured approach to identifying, assessing, and managing the risks that AI systems introduce. NIST released supplementary guidance through 2024 and 2025, including the Generative AI Profile (AI 600-1) in July 2024, which addresses risks specific to generative AI systems.
For US federal agencies, adoption is not voluntary. Executive orders and OMB directives require federal agencies to apply the AI RMF when deploying AI systems. For enterprise organizations, the framework serves as a governance standard that compliance teams, auditors, and risk committees increasingly reference when evaluating AI deployments.
The framework is not a checklist. It does not prescribe specific technical controls or certifications. Instead, it defines four functions (GOVERN, MAP, MEASURE, MANAGE) that organize the activities needed to manage AI risk throughout a system’s lifecycle. You decide how to implement each function based on your context, your risk tolerance, and the specific AI systems you deploy.
In January 2026, the Federal Register published a request for information on AI agent security, signaling increased regulatory attention to the specific risks that AI agents create when they take actions across multiple systems. The NIST AI RMF provides the structure to address those risks before prescriptive regulation arrives.
The four functions
The NIST AI RMF organizes risk management into four core functions. Here is what each one covers and why it matters for AI agent deployments.
GOVERN
GOVERN establishes the policies, processes, roles, and culture needed to manage AI risk. It is the foundation. Without GOVERN, risk management happens in isolation, and individual teams make ad hoc decisions without consistent standards.
For AI agents, GOVERN answers: Who approves agent deployments? What policies define acceptable agent behavior? Who is accountable when an agent causes an incident? GOVERN also covers data governance. When your agents connect to Slack, GitHub, Notion, and other tools, data flows across system boundaries. You need policies for how data moves, who controls model API keys, and what third-party dependencies your agents rely on.
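A data-flow policy like this can be made machine-checkable rather than living only in a wiki. The sketch below is illustrative, not a ClawStaff API: it expresses allowed cross-system flows as an explicit allow-list, so anything not granted is denied by default. The system names and the policy contents are assumptions for the example.

```python
# Hypothetical sketch: a data-flow policy expressed as an explicit allow-list.
# System names and the policy itself are illustrative, not a real platform API.
ALLOWED_FLOWS = {
    ("slack", "notion"),   # meeting notes may be summarized into Notion
    ("github", "slack"),   # CI results may be posted to a channel
}

def flow_permitted(source: str, destination: str) -> bool:
    """Return True only if data movement between two systems is explicitly allowed."""
    return (source.lower(), destination.lower()) in ALLOWED_FLOWS

print(flow_permitted("Slack", "Notion"))   # allowed by policy
print(flow_permitted("slack", "github"))   # denied: not on the allow-list
```

The default-deny shape matters: adding a new integration forces a policy decision instead of silently opening a new data path.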
MAP
MAP identifies the context, scope, and potential impacts of your AI systems. It is where you characterize risks before you try to measure or manage them. What does this agent do? Who are the stakeholders? What could go wrong?
For AI agents, MAP is critical because agents act on behalf of your team. A documentation assistant that summarizes meeting notes has a different risk profile than a support triage agent that routes customer requests and accesses customer data. MAP forces you to articulate those differences explicitly: catalog every agent, its integrations, and the data it touches.
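The catalog-and-classify step can be sketched as a simple per-agent record plus a coarse risk tier. The field names and tiering rule below are assumptions for illustration, not a prescribed NIST or ClawStaff schema; the point is that the documentation assistant and the support triage agent fall into different tiers for articulable reasons.

```python
from dataclasses import dataclass

# Illustrative sketch: a minimal per-agent catalog record for the MAP function.
# Field names and the tiering rule are assumptions, not a prescribed schema.
@dataclass
class AgentRecord:
    name: str
    purpose: str
    integrations: list     # tools the agent connects to
    data_categories: list  # e.g. "meeting-notes", "customer-pii"
    takes_actions: bool    # does it write, or only read and summarize?

def risk_tier(agent: AgentRecord) -> str:
    """Coarse classification: action-taking agents touching sensitive data rank highest."""
    if agent.takes_actions and "customer-pii" in agent.data_categories:
        return "high"
    if agent.takes_actions or "customer-pii" in agent.data_categories:
        return "medium"
    return "low"

docs_bot = AgentRecord("docs-assistant", "summarize meeting notes",
                       ["slack"], ["meeting-notes"], takes_actions=False)
triage_bot = AgentRecord("support-triage", "route customer requests",
                         ["slack", "crm"], ["customer-pii"], takes_actions=True)
print(risk_tier(docs_bot), risk_tier(triage_bot))  # low high
```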
MEASURE
MEASURE assesses the risks identified in MAP. It covers testing, monitoring, and evaluation throughout the system's lifecycle: not just at deployment, but on an ongoing basis.
For AI agents, MEASURE answers: How do you know an agent is performing correctly? What metrics indicate risk? How do you detect when behavior deviates from expectations? Audit trails are the technical foundation. Without complete logs, you cannot evaluate performance, investigate incidents, or demonstrate compliance.
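Deviation detection can start very simply: structured audit events plus a threshold against an observed baseline. This is a sketch under assumptions (the event shape and the `exceeds_baseline` check are invented for the example); production systems would sign, ship, and analyze these records with real tooling.

```python
import datetime

# Sketch, not a real API: structured audit events plus a trivial deviation check.
def audit_event(agent: str, action: str, target: str) -> dict:
    """Build one audit record; real systems would also sign and ship these."""
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "target": target,
    }

def exceeds_baseline(events: list, agent: str, action: str, limit: int) -> bool:
    """Flag when an agent performs one action type more often than its baseline."""
    count = sum(1 for e in events if e["agent"] == agent and e["action"] == action)
    return count > limit

# 25 CRM reads against a baseline of 20 should raise a flag.
log = [audit_event("support-triage", "crm.read", f"account-{i}") for i in range(25)]
print(exceeds_baseline(log, "support-triage", "crm.read", limit=20))  # True
```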
MANAGE
MANAGE is where you act on what GOVERN, MAP, and MEASURE produce. It covers the controls, mitigations, and response plans that reduce AI risk to acceptable levels. What controls prevent identified risks? How do you respond when something goes wrong?
For AI agents, MANAGE includes technical controls (container isolation, scoped permissions), operational controls (monitoring, escalation procedures), and organizational controls (training, accountability). The ability to immediately stop any agent (a kill switch) is a core MANAGE capability.
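A kill switch is, at its core, a control-plane flag checked before every dispatched action. The `AgentRuntime` class below is hypothetical, a minimal sketch of the pattern rather than how any particular platform implements it:

```python
# Minimal illustration of a kill switch: a halt flag checked before every action.
# The AgentRuntime class is hypothetical; real platforms wire this into the executor.
class AgentRuntime:
    def __init__(self, name: str):
        self.name = name
        self.halted = False

    def kill(self) -> None:
        """Immediately stop the agent; no further actions may be dispatched."""
        self.halted = True

    def dispatch(self, action: str) -> str:
        if self.halted:
            raise RuntimeError(f"{self.name} is halted; action '{action}' refused")
        return f"{self.name} executed {action}"

agent = AgentRuntime("support-triage")
print(agent.dispatch("crm.read"))
agent.kill()
# agent.dispatch("slack.post") would now raise RuntimeError
```

The key design property: the halt check happens in the dispatcher, not in the agent's own logic, so a misbehaving agent cannot route around it.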
Why the NIST AI RMF matters for AI agent platforms
Traditional AI risk frameworks were designed for predictive models, systems that take inputs, produce outputs, and do not take actions in the world. AI agents are different. They read data from your tools, make decisions based on that data, and take actions across multiple systems. An agent might read a support ticket in Slack, look up the customer’s account in your CRM, draft a response, and post it back to the channel. Each step crosses a system boundary. Each step creates a new risk surface.
The NIST AI RMF does not prescribe specific controls for AI agents, but its four functions map well to the risks agents create. GOVERN addresses the policy gap that lets agents proliferate without oversight. MAP forces you to understand each agent’s scope before deployment. MEASURE requires ongoing monitoring rather than one-time assessments. MANAGE ensures you have technical and operational controls in place.
As regulatory attention increases (the January 2026 Federal Register RFI on AI agent security is one signal among several), organizations that have already structured their agent governance around the NIST AI RMF will be in a stronger position than those that have to retrofit controls after the fact.
How AI agents create new risk categories
AI agents introduce risk categories that traditional AI risk assessments do not cover well. If you are applying the NIST AI RMF to an AI agent platform, you need to account for these.
Tool access and cross-system actions
An agent with access to multiple tools can take actions across system boundaries. A workflow agent might read from Notion, write to GitHub, and post to Slack in a single workflow. Each tool connection is a risk surface. Each cross-system action creates a data flow that needs to be documented, monitored, and controlled.
The risk is not just data leakage. It is unintended action. An agent that misinterprets a request and creates a GitHub issue from confidential Slack messages has not just leaked data. It has taken an action in a production system that may be difficult to reverse.
Data flow across integrations
When agents connect to your tools, data flows between systems that may have different access controls, retention policies, and compliance requirements. The data that an agent reads from your CRM may be subject to different governance rules than the channel where it posts a summary.
GOVERN and MAP require you to document these flows. MEASURE requires you to monitor them. MANAGE requires you to control them through scoped permissions and access policies.
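One way to express "different governance rules per system" in code is a sensitivity ranking: an agent may move data toward systems with equal or stronger protection, never toward weaker ones. The rankings below are assumptions for the sketch; real policies would come from your data classification.

```python
# Sketch under assumptions: each integration carries a sensitivity level, and an
# agent may only move data toward systems of equal or higher protection.
SENSITIVITY = {"crm": 3, "github": 2, "slack": 1}  # illustrative rankings

def write_allowed(read_from: str, write_to: str) -> bool:
    """Block flows from a more protected system into a less protected one."""
    return SENSITIVITY[write_to] >= SENSITIVITY[read_from]

print(write_allowed("slack", "crm"))   # True: moving toward stronger controls is fine
print(write_allowed("crm", "slack"))   # False: CRM data must not leak into chat
```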
Multi-agent coordination
When multiple agents operate within the same organization, they can interact with each other. An orchestrator agent may delegate tasks to specialist agents. If agent-to-agent communication is not controlled, one compromised or misconfigured agent can affect others.
Agent scoping (defining whether an agent is private, team-scoped, or organization-scoped) is a MANAGE control that limits agent-to-agent interaction to intended boundaries.
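The scoping rule can be sketched as a visibility check evaluated before any agent-to-agent delegation. The `Scope` levels mirror the private/team/organization distinction above; the function and its parameters are illustrative, not a platform API.

```python
from enum import Enum

# Hypothetical sketch of scope enforcement for agent-to-agent calls; the Scope
# levels mirror the private/team/org distinction, but the API is invented here.
class Scope(Enum):
    PRIVATE = 1
    TEAM = 2
    ORG = 3

def may_delegate(caller_scope: Scope, callee_scope: Scope,
                 same_team: bool, same_owner: bool) -> bool:
    """An agent may only call another agent visible within its own boundary."""
    if callee_scope is Scope.ORG:
        return True
    if callee_scope is Scope.TEAM:
        return same_team
    return same_owner  # PRIVATE agents are reachable only via their owner

print(may_delegate(Scope.TEAM, Scope.ORG, same_team=False, same_owner=False))   # True
print(may_delegate(Scope.ORG, Scope.PRIVATE, same_team=True, same_owner=False)) # False
```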
Mapping AI agent controls to the NIST AI RMF
Here is how specific AI agent platform controls map to the four NIST AI RMF functions:
| NIST AI RMF Function | AI Agent Control | What It Addresses |
|---|---|---|
| GOVERN | BYOK model API key management | Data flow governance: your keys, your provider relationship |
| GOVERN | Agent deployment approval process | Policy enforcement for who can deploy agents and when |
| GOVERN | Agent scoping (private/team/org) | Access governance: who can interact with each agent |
| MAP | Per-agent integration catalog | Documents which tools each agent accesses and why |
| MAP | Risk classification per agent | Characterizes the risk profile of each agent’s use case |
| MEASURE | Audit trail logging | Records every action for performance assessment and compliance |
| MEASURE | Team feedback and review | Qualitative assessment of agent output accuracy |
| MANAGE | Container isolation (ClawCage) | Contains blast radius of agent failures or compromise |
| MANAGE | Scoped permissions per agent | Enforces least-privilege access to tools and data |
| MANAGE | Agent pause and reconfiguration | Response controls for stopping or modifying agent behavior |
How ClawStaff supports NIST AI RMF compliance
ClawStaff is not a NIST AI RMF compliance tool. The framework is about organizational governance, not product features. But your platform’s architecture determines how much work each function requires, and whether your controls are structural or aspirational.
GOVERN: BYOK and data flow control. With BYOK, your AI model API keys stay under your control. Data flows from your agent’s container directly to your model provider. You define the provider relationship. You manage the keys. Data flow governance is enforced by architecture, not a policy document in a wiki.
GOVERN: Access controls and agent scoping. Scoped permissions define what each agent can access. Agent scoping (private, team, organization) defines who can interact with each agent. These are GOVERN controls at the platform level.
MAP: Per-agent configuration. Every agent has a defined configuration: which tools it connects to, what actions it can perform, what scope it operates in. When you need to catalog agents and their risk surfaces, the configuration is already documented.
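The idea that the deployed configuration *is* the catalog can be shown with a small sketch. The keys below are assumptions for illustration, not ClawStaff's actual schema; the point is that the MAP inventory can be rendered directly from what is deployed, so it never drifts.

```python
# Illustrative only: a declarative per-agent configuration of the kind described
# above. Keys are assumptions for the sketch, not an actual platform schema.
AGENT_CONFIG = {
    "name": "support-triage",
    "scope": "team",
    "tools": {
        "slack": ["read_channel", "post_message"],
        "crm": ["read_account"],   # no write access granted
    },
}

def catalog_entry(config: dict) -> str:
    """Render a MAP-style catalog line directly from the deployed configuration."""
    tools = ", ".join(f"{t}:{'/'.join(a)}" for t, a in config["tools"].items())
    return f"{config['name']} (scope={config['scope']}) -> {tools}"

print(catalog_entry(AGENT_CONFIG))
```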
MEASURE: Audit trail. The audit trail logs every agent action: tool access, data reads, outputs, errors, and configuration changes. This is the data layer that MEASURE requires. You can assess agent performance, detect anomalies, investigate incidents, and demonstrate compliance with evidence.
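Incident investigation over such a trail reduces to filtering structured records. The record shape below is assumed for illustration; any structured log store supports the same kind of query.

```python
# Sketch of an incident-investigation query over audit records. The record shape
# is assumed for illustration; any structured log store supports this filter.
events = [
    {"agent": "support-triage", "action": "crm.read", "target": "acct-17", "error": None},
    {"agent": "support-triage", "action": "slack.post", "target": "#support", "error": None},
    {"agent": "docs-assistant", "action": "notion.write", "target": "page-9", "error": "timeout"},
]

def incident_slice(log: list, agent: str = None, errors_only: bool = False) -> list:
    """Filter the trail down to the records relevant to one investigation."""
    out = [e for e in log if agent is None or e["agent"] == agent]
    if errors_only:
        out = [e for e in out if e["error"] is not None]
    return out

print(incident_slice(events, errors_only=True))  # the one failed notion.write
```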
MANAGE: ClawCage container isolation. ClawCage runs each organization’s agents in an isolated container with dedicated resources, storage, and network boundaries. If an agent is misconfigured or behaves unexpectedly, the impact is bounded to its container.
MANAGE: Scoped permissions and kill switches. Every agent starts with zero access. Each permission is explicitly granted. Agents can be paused or stopped immediately. These controls enforce least-privilege access and provide response capabilities when MEASURE detects a problem.
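Default-deny permissions are a small but load-bearing pattern: the grant set starts empty, and every capability must be added explicitly. The class below is a minimal sketch with invented capability names, not the platform's actual permission model.

```python
# Minimal sketch of default-deny permissions: every agent starts with an empty
# grant set and each capability must be added explicitly. Names are illustrative.
class Permissions:
    def __init__(self):
        self.grants = set()  # zero access by default

    def grant(self, capability: str) -> None:
        self.grants.add(capability)

    def check(self, capability: str) -> bool:
        return capability in self.grants

perms = Permissions()
print(perms.check("github.create_issue"))  # False: nothing is allowed yet
perms.grant("slack.read_channel")
print(perms.check("slack.read_channel"))   # True: only what was explicitly granted
```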
Action checklist for teams
If you are applying the NIST AI RMF to your AI agent deployment, here is a concrete starting point.
GOVERN
- Designate an owner for AI agent governance within your organization
- Document your AI agent deployment approval process
- Define data flow policies for cross-system agent actions
- Establish API key management policies (who provisions keys, how they are rotated, where they are stored)
- Set a review cadence for AI policies (quarterly at minimum)
MAP
- Catalog every deployed agent with its purpose, integrations, and data access
- Classify each agent by risk level based on the data it accesses and the actions it takes
- Document stakeholders affected by each agent
- Identify agents that access regulated data (PII, financial, health)
MEASURE
- Enable audit trail logging for all agents
- Define performance metrics for each agent’s primary use case
- Schedule regular reviews of agent outputs and behavior
- Collect and review team feedback on agent accuracy and appropriateness
- Conduct periodic risk reassessments as agent use cases expand
MANAGE
- Configure scoped permissions following least-privilege principles
- Verify container isolation is active for your organization’s agents
- Document incident response procedures for agent-related events
- Test kill switches: confirm you can stop any agent immediately
- Review and update access controls when team members join or leave
The NIST AI RMF is not a one-time assessment. It is a lifecycle approach. As you deploy more agents, connect more tools, and expand use cases, cycle through the four functions regularly. The framework scales with your deployment. The same structure that governs three agents governs thirty.
For related compliance guidance, see our SOC 2 mapping for AI agents, EU AI Act overview, and AI Governance Framework.