
NIST AI Risk Management Framework and AI agents

Authensor

The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework for managing risks from AI systems. It is organized around four core functions: Govern, Map, Measure, and Manage. For teams deploying AI agents, it offers a structured way to identify and control agent-specific risks.

The four functions

Govern

Establish policies, processes, and accountability for AI risk management.

For AI agents, this means:

  • Defining who owns the agent's safety policies
  • Establishing a process for reviewing and updating policies
  • Assigning accountability for agent incidents
  • Documenting the agent's intended use and boundaries

Map

Understand and document the AI system's context, capabilities, and risks.

For AI agents:

  • Catalog all tools the agent can access
  • Identify what data the agent can reach
  • Map the agent's interactions with other systems and agents
  • Document known risks (prompt injection, tool misuse, privilege escalation)
  • Identify who is affected by the agent's actions
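The Map step above amounts to building an inventory that can be reviewed and diffed over time. A minimal sketch in Python (the structures and field names here are illustrative assumptions, not part of any Authensor API):

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    data_scopes: list[str]   # data the tool can reach
    known_risks: list[str]   # e.g. "prompt injection", "tool misuse"

@dataclass
class AgentMap:
    agent: str
    tools: list[Tool] = field(default_factory=list)
    downstream_systems: list[str] = field(default_factory=list)
    affected_parties: list[str] = field(default_factory=list)

    def risk_register(self) -> dict[str, list[str]]:
        """Collect known risks per tool for periodic review."""
        return {t.name: t.known_risks for t in self.tools}

inventory = AgentMap(
    agent="support-agent",
    tools=[
        Tool("send_email", ["customer_contacts"], ["tool misuse"]),
        Tool("read_tickets", ["ticket_db"], ["prompt injection"]),
    ],
    downstream_systems=["CRM"],
    affected_parties=["customers", "support staff"],
)
print(inventory.risk_register())
```

Keeping the inventory in a structured, versioned form makes the later Measure and Govern steps easier: changes to the agent's tool set show up as diffs rather than tribal knowledge.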

Measure

Monitor and evaluate the AI system's performance and risk.

For AI agents:

  • Track policy enforcement metrics (allow, block, escalate rates)
  • Monitor behavioral patterns with Sentinel
  • Run periodic red team exercises
  • Measure false positive and false negative rates for content scanning
  • Evaluate whether approval workflows are functioning effectively
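The metrics above can be computed from whatever decision log your enforcement layer produces. A minimal sketch, assuming a flat list of decision outcomes and a small labeled sample for scanner accuracy (the data shapes are hypothetical, not an Authensor log format):

```python
from collections import Counter

# Toy decision log; in practice this would come from receipts or logs.
decisions = ["allow", "allow", "block", "escalate", "allow", "block"]

counts = Counter(decisions)
total = sum(counts.values())
rates = {outcome: counts[outcome] / total
         for outcome in ("allow", "block", "escalate")}
print(rates)

# False positive / false negative rates for content scanning, given
# labeled samples: each pair is (scanner_flagged, actually_malicious).
samples = [(True, True), (True, False), (False, False),
           (False, True), (True, True)]
fp = sum(1 for flagged, bad in samples if flagged and not bad)
fn = sum(1 for flagged, bad in samples if not flagged and bad)
negatives = sum(1 for _, bad in samples if not bad)
positives = sum(1 for _, bad in samples if bad)
print(f"FPR={fp / negatives:.2f}  FNR={fn / positives:.2f}")
```

Tracking these rates over time is what turns Measure into a feedback loop: a rising escalate rate or FNR is a signal to revisit the policies written under Manage.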

Manage

Take action to address identified risks.

For AI agents:

  • Implement a policy engine with YAML rules
  • Deploy content scanning with Aegis
  • Enable behavioral monitoring with Sentinel
  • Set up approval workflows for high-risk actions
  • Maintain hash-chained audit trails
  • Establish incident response procedures
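The hash-chained audit trail mentioned above works by having each entry's hash cover its content plus the previous entry's hash, so tampering with any entry breaks the chain from that point on. A minimal sketch (illustrative only, not Authensor's actual record format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edit to an entry invalidates the chain."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

trail: list[dict] = []
append_entry(trail, {"action": "send_email", "decision": "escalate"})
append_entry(trail, {"action": "read_file", "decision": "allow"})
print(verify(trail))                       # intact chain verifies
trail[0]["event"]["decision"] = "allow"    # tamper with the first entry
print(verify(trail))                       # verification now fails
```

The design choice here is that integrity is checkable by anyone holding the log, without trusting the process that wrote it, which is what makes such trails useful as Manage-function evidence.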

NIST AI RMF and Authensor

| NIST Function | Authensor Component |
|---------------|---------------------|
| Govern | Policy files (documented, versioned, reviewable) |
| Map | Policy rules enumerate tools and access patterns |
| Measure | Sentinel metrics, receipt analytics |
| Manage | Policy enforcement, Aegis scanning, approval workflows |

Characteristics of trustworthy AI

The NIST AI RMF identifies characteristics of trustworthy AI systems:

  • Valid and reliable: The agent performs its task correctly
  • Safe: The agent does not cause harm
  • Secure and resilient: The agent resists attacks
  • Accountable and transparent: The agent's actions are logged and explainable
  • Explainable and interpretable: Decisions can be understood
  • Privacy-enhanced: Personal data is protected
  • Fair: The agent does not discriminate

For each characteristic, document how your agent system addresses it. The policy engine, content scanner, behavioral monitor, and audit trail provide evidence for security, accountability, and transparency. The other characteristics require additional controls specific to your use case.

Getting started

Start with the Map function: catalog your agent's tools, data access, and risks. Then implement Manage controls with Authensor. Use Measure to verify the controls work. Document everything under Govern.
