sdk · guardrails · tutorial · agent-safety

LangChain Safety Middleware Guide

Authensor

LangChain is one of the most popular frameworks for building AI agents. Authensor's LangChain adapter integrates policy enforcement, content scanning, and audit logging directly into your chain and graph pipelines with minimal code changes.

Installation

Install the adapter alongside the core SDK:

npm install @authensor/langchain @authensor/sdk

The adapter provides LangChain-compatible callback handlers and tool wrappers that route safety checks through Authensor's engine.

Tool Wrapping

The primary integration point is tool wrapping. Authensor's adapter wraps each LangChain tool with a safety layer that evaluates every tool call against your policy before execution.

import { wrapTools } from '@authensor/langchain';

const safeTools = wrapTools(tools, {
  authensor: client,
  agentId: 'my-langchain-agent',
});

Each tool call is submitted to the policy engine as an action envelope. The policy evaluates whether the agent is authorized to use that tool with those arguments. Denied calls throw an error that LangChain handles as a tool failure.
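The wrap-and-check pattern can be sketched as follows. This is an illustrative stand-in, not the adapter's actual implementation: `checkPolicy`, the `Tool` shape, and the example rule are all assumptions made for the sketch.

```typescript
// Minimal sketch of the wrap-and-check pattern. The real adapter builds a
// full action envelope and writes to the audit trail; this only shows the
// control flow: evaluate the policy, then execute or throw.
type PolicyDecision = { allow: boolean; reason?: string };
type Tool = {
  name: string;
  call: (args: Record<string, unknown>) => Promise<string>;
};

// Stand-in for Authensor's policy engine (assumed, for illustration):
// here, a single hard-coded rule denying the shell tool.
async function checkPolicy(
  agentId: string,
  tool: string,
  args: Record<string, unknown>
): Promise<PolicyDecision> {
  return tool === 'shell'
    ? { allow: false, reason: 'shell is not permitted for this agent' }
    : { allow: true };
}

function wrapTool(tool: Tool, agentId: string): Tool {
  return {
    name: tool.name,
    call: async (args) => {
      const decision = await checkPolicy(agentId, tool.name, args);
      if (!decision.allow) {
        // LangChain surfaces this as a tool failure.
        throw new Error(`Denied by policy: ${decision.reason}`);
      }
      return tool.call(args);
    },
  };
}
```

Allowed calls pass through untouched; denied calls never reach the underlying tool, which is what makes the wrapper a safe default for untrusted arguments.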

Callback Handler

The Authensor callback handler logs every LLM call, tool invocation, and chain step to the audit trail. Attach it to your chain or agent executor:

import { AuthensorCallbackHandler } from '@authensor/langchain';

const handler = new AuthensorCallbackHandler({ client });
// createAgent is a placeholder for your agent factory (for example,
// createReactAgent from @langchain/langgraph/prebuilt).
const agent = createAgent({ tools: safeTools, callbacks: [handler] });

LangGraph Integration

For LangGraph stateful agents, wrap the tool node with Authensor's safety middleware. The adapter respects LangGraph's state management, passing agent state context to the policy engine for richer evaluation.
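The state-aware evaluation can be sketched like this. The state shape, `evaluateWithState`, and `withSafety` are illustrative assumptions, not the adapter's actual API; the point is that the current graph state travels with the tool call so policies can reason about context.

```typescript
// Illustrative sketch of a state-aware safety wrapper for a
// LangGraph-style node. All names and the state shape are assumptions.
interface AgentState {
  userId: string;
  stepCount: number;
}

type Decision = { allow: boolean; reason?: string };
type NodeFn = (state: AgentState) => Promise<AgentState>;

// Stand-in for the policy engine's state-aware evaluation: here the
// example rule caps the number of graph steps an agent may take.
function evaluateWithState(tool: string, state: AgentState): Decision {
  if (state.stepCount > 20) {
    return { allow: false, reason: 'step budget exceeded' };
  }
  return { allow: true };
}

// Wrap a node so the policy is consulted before the node runs.
function withSafety(node: NodeFn): NodeFn {
  return async (state) => {
    const decision = evaluateWithState('toolNode', state);
    if (!decision.allow) {
      throw new Error(`Denied by policy: ${decision.reason}`);
    }
    return node(state);
  };
}
```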

Policy Configuration

Write policies that reference LangChain tool names. For example, deny the shell tool in production while allowing it in development. Use Authensor's environment-based policy resolution to load different rules per environment.
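A per-environment rule set might look like the following. The schema here is an assumption for illustration, not Authensor's actual policy format; it only shows the shape of the shell-in-production example above.

```typescript
// Assumed policy shape (illustrative): deny the shell tool in
// production, allow it in development.
const policies = {
  production: [{ tool: 'shell', effect: 'deny' as const }],
  development: [{ tool: 'shell', effect: 'allow' as const }],
};

// Resolve the effect for a tool in a given environment.
// Unlisted tools default to allow in this sketch; a real policy
// would more likely default to deny.
function resolveEffect(
  env: keyof typeof policies,
  tool: string
): 'allow' | 'deny' {
  const rule = policies[env].find((r) => r.tool === tool);
  return rule ? rule.effect : 'allow';
}
```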

Error Handling

When a tool call is denied, the adapter returns a structured error message to the LLM. The model can then explain to the user why the action was blocked or attempt an alternative approach. This keeps the agent loop running rather than crashing on denials.
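The keep-the-loop-running behavior can be sketched as below: a denial is caught and returned as a readable tool result instead of propagating as an unhandled exception. `runToolSafely` and the message format are illustrative assumptions.

```typescript
// Sketch: convert a policy denial into a tool result the model can
// read, rather than crashing the agent loop. Names are illustrative.
async function runToolSafely(call: () => Promise<string>): Promise<string> {
  try {
    return await call();
  } catch (err) {
    // The LLM receives this string as the tool's output and can
    // explain the block to the user or try another approach.
    return `BLOCKED: ${(err as Error).message}`;
  }
}
```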

Configure escalation policies for sensitive tools. Instead of denying outright, route the action to a human approver through Authensor's approval workflow system.
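An escalation path might be sketched as follows. `escalate` and the `requestApproval` callback are assumptions standing in for Authensor's approval workflow, which the source does not detail.

```typescript
// Illustrative escalation sketch: instead of denying a sensitive tool
// outright, park the action and wait for a human decision.
type Approval = 'approved' | 'rejected';
type Action = { tool: string; args: unknown };

async function escalate(
  action: Action,
  // Stand-in for the approval workflow: resolves once a human decides.
  requestApproval: (a: Action) => Promise<Approval>
): Promise<boolean> {
  const result = await requestApproval(action);
  return result === 'approved';
}
```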
