CrewAI enables multi-agent collaboration where specialized agents work together on tasks. The Authensor CrewAI adapter adds policy enforcement to every tool call across your entire crew, ensuring that no agent exceeds its authorized actions.
Install the adapter:
pip install authensor-crewai
from authensor.crewai import with_authensor
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool, FileWriteTool
search = with_authensor(SerperDevTool(), policy_path="./policy.yaml")
write = with_authensor(FileWriteTool(), policy_path="./policy.yaml")
The wrapped tools behave identically to the originals. CrewAI agents use them without any code changes.
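Conceptually, the wrapper intercepts each invocation, checks it against the policy rules, and only then delegates to the underlying tool. The stand-in below is an illustrative sketch of that pattern, not the actual Authensor implementation (the class name, rule shape, and blocked-call message are all assumptions):

```python
import fnmatch


class GuardedTool:
    """Illustrative stand-in: checks a rule list before delegating to a tool."""

    def __init__(self, tool, rules):
        self._tool = tool
        self._rules = rules  # e.g. [{"tool": "search.*", "action": "allow"}]

    def run(self, *args, **kwargs):
        name = getattr(self._tool, "name", type(self._tool).__name__)
        for rule in self._rules:
            if fnmatch.fnmatch(name, rule["tool"]):
                if rule["action"] == "block":
                    # Blocked calls return an error instead of executing.
                    return f"BLOCKED: {rule.get('reason', 'policy violation')}"
                break  # first matching rule wins
        return self._tool.run(*args, **kwargs)


class FakeSearch:
    """Minimal tool stub for demonstration."""
    name = "search.web"

    def run(self, query):
        return f"results for {query}"


allowed = GuardedTool(FakeSearch(), [{"tool": "search.*", "action": "allow"}])
blocked = GuardedTool(FakeSearch(), [{"tool": "*", "action": "block", "reason": "denied"}])
```

Because the wrapper exposes the same `run` interface as the tool it wraps, the caller cannot tell the difference until a call is blocked.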
Instead of wrapping individual tools, apply Authensor to the crew:
from authensor.crewai import AuthensorGuard
guard = AuthensorGuard(
    policy_path="./policy.yaml",
    aegis_enabled=True,
    sentinel_enabled=True,
)
researcher = Agent(
    role="Researcher",
    goal="Find accurate information",
    backstory="An analyst who verifies sources.",
    tools=[SerperDevTool()],  # raw tool; the crew-level guard enforces policy
)
writer = Agent(
    role="Writer",
    goal="Write reports",
    backstory="A technical writer.",
    tools=[FileWriteTool()],
)

# Minimal placeholder tasks so the example runs end to end.
research_task = Task(description="Research the topic", expected_output="Key findings", agent=researcher)
write_task = Task(description="Write the report", expected_output="A report file", agent=writer)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    callbacks=[guard],
)
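The guard reads the same policy format as the per-tool wrapper. A minimal `policy.yaml` for this crew might look like the following (the specific rule set is illustrative):

```yaml
rules:
  - tool: "search.*"
    action: allow
  - tool: "file.write"
    action: allow
  - tool: "*"
    action: block
    reason: "Only search and file writes are permitted"
```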
Different agents in a crew often need different permissions. A researcher should be able to search the web but not write files. A writer should be able to write files but not send emails. Define per-agent policies:
# policy-researcher.yaml
rules:
  - tool: "search.*"
    action: allow
  - tool: "*"
    action: block
    reason: "Researcher can only search"

# policy-writer.yaml
rules:
  - tool: "file.write"
    action: allow
    when:
      args.path:
        startsWith: "/output/"
  - tool: "*"
    action: block
    reason: "Writer can only write to /output/"
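Under a first-match-wins reading of these rules (an assumption about Authensor's evaluation order, suggested by the catch-all `"*"` block rule at the end), a policy checker could be sketched like this:

```python
import fnmatch


def evaluate(rules, tool_name, arguments):
    """Return (action, reason) for the first rule matching tool_name.

    Supports the startsWith condition on dotted argument paths
    such as "args.path". Sketch only; not Authensor's implementation.
    """
    for rule in rules:
        if not fnmatch.fnmatch(tool_name, rule["tool"]):
            continue
        conditions = rule.get("when", {})
        if all(
            str(arguments.get(path.split(".", 1)[1], "")).startswith(cond["startsWith"])
            for path, cond in conditions.items()
        ):
            return rule["action"], rule.get("reason")
    return "block", "no matching rule"  # assumed default-deny


writer_rules = [
    {"tool": "file.write", "action": "allow",
     "when": {"args.path": {"startsWith": "/output/"}}},
    {"tool": "*", "action": "block", "reason": "Writer can only write to /output/"},
]
```

With these rules, a write to `/output/report.md` is allowed, while a write anywhere else falls through to the catch-all block rule.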
researcher = Agent(
role="Researcher",
tools=[with_authensor(search, policy_path="./policy-researcher.yaml")],
)
When agents in a crew pass results to each other, Authensor tracks the chain of actions across agents. Each receipt includes the agent identity, so the audit trail shows which agent initiated each action.
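Authensor's actual receipt schema is not reproduced here, but the idea of a per-action record carrying agent identity can be modeled roughly as follows (the field names are illustrative assumptions):

```python
from dataclasses import dataclass, field
import time


@dataclass
class ActionReceipt:
    """Illustrative audit record: one entry per tool call in the crew."""
    agent: str        # which crew agent initiated the call
    tool: str         # name of the tool that was invoked
    action: str       # policy decision: "allow" or "block"
    timestamp: float = field(default_factory=time.time)


receipt = ActionReceipt(agent="Researcher", tool="search.web", action="allow")
```

A chronologically ordered list of such records is enough to reconstruct which agent took which action, even when results are handed off between agents.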
When an agent's tool call is blocked, the agent receives an error message. CrewAI's built-in retry logic may cause the agent to try again or take an alternative approach. If you want blocked actions to stop the task entirely, configure the guard to raise an exception:
guard = AuthensorGuard(
    policy_path="./policy.yaml",
    on_block="raise",  # raises AuthensorBlockedError
)
This gives you control over whether blocked actions are treated as soft failures (the agent adapts) or hard failures (the task stops).
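The two failure modes can be illustrated with a stand-in guard. `AuthensorBlockedError` is the exception named in the snippet above; the guard logic here is a sketch of the soft/hard distinction, not the real implementation:

```python
class AuthensorBlockedError(Exception):
    """Raised on a blocked action when on_block='raise'."""


class StubGuard:
    """Illustrative stand-in showing the two on_block modes."""

    def __init__(self, on_block="soft"):
        self.on_block = on_block

    def handle_block(self, tool, reason):
        if self.on_block == "raise":
            # Hard failure: the task stops with an exception.
            raise AuthensorBlockedError(f"{tool}: {reason}")
        # Soft failure: the agent sees an error string and can adapt.
        return f"BLOCKED {tool}: {reason}"


soft = StubGuard().handle_block("file.write", "outside /output/")
try:
    StubGuard(on_block="raise").handle_block("file.write", "outside /output/")
    hard = None
except AuthensorBlockedError as err:
    hard = str(err)
```

In hard-failure mode, wrap `crew.kickoff()` in a try/except for the exception if you want to log or recover at the application level.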