CrewAI orchestrates multiple AI agents working together on complex tasks. Each agent has its own role, tools, and capabilities. Authensor's CrewAI adapter enforces safety policies at the agent and tool level, ensuring that no crew member exceeds its authorized scope.
pip install authensor-crewai
The adapter provides decorators and middleware that integrate with CrewAI's agent and tool lifecycle.
In a CrewAI crew, different agents have different roles. A researcher agent should access search tools but not file system tools. A writer agent should create content but not execute code. Authensor lets you define per-agent policies that enforce these boundaries.
Map each CrewAI agent to an Authensor agent ID. The policy engine evaluates each tool call against the specific agent's authorized actions.
from authensor_crewai import SafeAgent

# SafeAgent wraps a CrewAI agent and binds it to an Authensor policy
researcher = SafeAgent(
    role="Researcher",
    tools=[search_tool, web_scraper],       # tools this agent may invoke
    authensor_agent_id="crew-researcher",   # ID the policy engine evaluates against
)
The adapter wraps CrewAI tool execution with policy checks. Before any tool runs, the action envelope (tool name, arguments, agent ID) goes to the policy engine. Denied actions return an error message to the agent, which can then adjust its approach.
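The flow above can be sketched in a few lines. `ActionEnvelope`, `POLICY`, and `run_tool` are illustrative names for this sketch, not the adapter's actual API:

```python
from dataclasses import dataclass

@dataclass
class ActionEnvelope:
    # The three fields named above: tool name, arguments, agent ID
    tool_name: str
    arguments: dict
    agent_id: str

# Toy allow-list standing in for the real policy engine
POLICY = {"crew-researcher": {"search"}}

def run_tool(tool_fn, envelope):
    """Evaluate the envelope before execution; a denial becomes an error
    message the agent can read and react to, not an exception."""
    if envelope.tool_name not in POLICY.get(envelope.agent_id, set()):
        return f"Error: '{envelope.tool_name}' denied for '{envelope.agent_id}'"
    return tool_fn(**envelope.arguments)

def search(query):
    return f"results for {query}"
```

With this sketch, `run_tool(search, ActionEnvelope("search", {"query": "x"}, "crew-researcher"))` executes normally, while the same call from an unauthorized agent ID returns the denial string instead.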
Beyond individual agent policies, set crew-level safety rules. These apply to all agents in the crew and handle cross-cutting concerns like rate limiting, total cost budgets, and forbidden action categories.
Crew-level policies also govern delegation. If one agent delegates a task to another, the policy engine verifies that the delegation is authorized and that the receiving agent has the necessary permissions.
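As a rough sketch of what a crew-level policy and delegation check might look like (the keys and `delegation_allowed` helper are hypothetical, not Authensor's actual schema):

```python
# Illustrative crew-level policy: these rules apply to every agent,
# on top of each agent's individual policy.
CREW_POLICY = {
    "max_tool_calls_per_agent": 50,      # simple rate limit
    "max_total_cost_usd": 5.00,          # whole-crew budget
    "forbidden_categories": {"code_execution", "file_delete"},
    # Which agents may delegate to which others
    "delegation": {"crew-researcher": {"crew-writer"}},
}

def delegation_allowed(policy, from_agent, to_agent):
    """A delegation passes only if it is explicitly authorized."""
    return to_agent in policy["delegation"].get(from_agent, set())
```

Under this sketch, a researcher-to-writer handoff passes, while the reverse direction is rejected because it was never authorized.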
Authensor's Sentinel engine monitors crew activity in aggregate. It tracks patterns like one agent dominating tool usage, unexpected delegation chains, or agents repeatedly hitting policy denials. These patterns often indicate either misconfiguration or an agent that has gone off track.
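The kind of aggregate analysis described here could look roughly like the following; the function and thresholds are a sketch for intuition, not Sentinel's implementation:

```python
from collections import Counter

def flag_anomalies(events, denial_threshold=3, dominance_ratio=0.8):
    """Scan crew events for the patterns mentioned above.
    Each event is an (agent_id, outcome) pair, outcome in {'allowed', 'denied'}."""
    calls = Counter(agent for agent, _ in events)
    denials = Counter(agent for agent, outcome in events if outcome == "denied")
    total = sum(calls.values())
    flags = []
    for agent, n in calls.items():
        # One agent dominating tool usage
        if total and n / total >= dominance_ratio:
            flags.append(f"{agent} dominates tool usage ({n}/{total} calls)")
    for agent, n in denials.items():
        # An agent repeatedly hitting policy denials
        if n >= denial_threshold:
            flags.append(f"{agent} hit {n} policy denials")
    return flags
```

Running this over a log where one agent is denied three times would surface a denial flag for that agent, pointing at either a misconfigured policy or an agent pursuing unauthorized actions.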
Every tool call, delegation, and safety decision across all crew members is recorded in the receipt chain. This gives you a complete timeline of what each agent did during a crew execution, which is invaluable for debugging and compliance.
Review crew audit trails after each run during development. In production, set up automated analysis to flag unusual patterns.
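To make the idea of a tamper-evident receipt chain concrete, here is a minimal hash-linked log. This is a generic illustration of the concept; Authensor's actual receipt format and verification tooling are not shown in this guide:

```python
import hashlib
import json

def append_receipt(chain, record):
    """Append a record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An automated post-run check can then call `verify` on the exported chain and filter entries by agent ID or decision to flag unusual patterns.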