The Model Context Protocol (MCP) lets AI agents discover and call tools exposed by external servers. An MCP gateway sits between the agent and those servers, enforcing your safety policy on every tool call before it reaches the target server.
Without a gateway, your agent talks directly to MCP servers. Any tool the server exposes is callable. If the server is compromised or if the agent is manipulated by a prompt injection, there is no enforcement point to stop dangerous actions.
A gateway adds that enforcement point. Every tool call passes through policy evaluation, content scanning, and audit logging before being forwarded to the target server.
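Conceptually, the mediation step is a policy check in front of every forwarded call. The sketch below illustrates the idea only; the type and function names (`ToolCall`, `evaluate`, `mediate`) and the toy rules are assumptions, not the gateway's actual API.

```typescript
// Illustrative sketch of gateway mediation: every tool call is checked
// against a policy before anything reaches the upstream server.
type ToolCall = { server: string; tool: string; args: Record<string, unknown> };
type PolicyDecision = "allow" | "block" | "escalate";

// Toy policy: block filesystem writes outside /tmp, escalate database
// deletes for human approval, allow everything else.
function evaluate(call: ToolCall): PolicyDecision {
  if (call.tool === "write_file" && !String(call.args.path).startsWith("/tmp")) {
    return "block";
  }
  if (call.server === "database" && call.tool === "delete") {
    return "escalate";
  }
  return "allow";
}

// Allowed calls are forwarded; blocked calls become error responses;
// escalated calls are held until a human approves.
function mediate(call: ToolCall): string {
  switch (evaluate(call)) {
    case "allow":
      return "forwarded";
    case "block":
      return "error: blocked by policy";
    case "escalate":
      return "held for approval";
  }
}
```

The point is architectural: the agent never holds a direct connection to the upstream server, so there is no path around the policy check.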
Install the gateway package:

```sh
pnpm add @authensor/mcp-server
```
Create a configuration file:
```yaml
# mcp-gateway.yaml
listen:
  transport: stdio   # or "sse" for HTTP
upstream:
  - name: "filesystem"
    url: "stdio://npx @modelcontextprotocol/server-filesystem /tmp"
  - name: "database"
    url: "sse://localhost:3001/mcp"
policy: "./policy.yaml"
aegis:
  enabled: true
sentinel:
  enabled: true
```
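The referenced `policy.yaml` holds the rules the gateway evaluates on each call. Its real schema is product-specific; purely as an illustration, a rule file might map tool patterns to allow/block/escalate decisions:

```yaml
# policy.yaml — hypothetical schema for illustration only;
# consult the policy reference for the actual format.
rules:
  - tools: "filesystem/read_*"
    action: allow
  - tools: "filesystem/write_*"
    action: escalate   # held for human approval
  - tools: "database/*"
    action: block
default: block         # deny anything not explicitly matched
```

A default-deny posture like the last line is the usual recommendation: new tools an upstream server adds are blocked until you opt them in.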
Start the gateway:

```sh
npx authensor mcp-gateway --config ./mcp-gateway.yaml
```
The gateway starts as an MCP server itself. Point your AI agent at the gateway instead of the upstream servers. The agent sees the same tools, but every call is now mediated.
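For example, a client that launches MCP servers over stdio (a Claude Desktop-style `mcpServers` config) would list the gateway command in place of each upstream server:

```json
{
  "mcpServers": {
    "gateway": {
      "command": "npx",
      "args": ["authensor", "mcp-gateway", "--config", "./mcp-gateway.yaml"]
    }
  }
}
```

The `filesystem` and `database` entries disappear from the client config entirely; the gateway aggregates their tools behind a single endpoint.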
If the policy blocks the call, the agent receives an error response. If the policy escalates, the call is held until a human approves it.
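In MCP terms, a blocked call comes back as a `tools/call` result flagged with `isError: true`, which the agent can surface or retry differently (the message text and request `id` here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "isError": true,
    "content": [
      { "type": "text", "text": "Blocked by gateway policy: filesystem/write_file outside /tmp" }
    ]
  }
}
```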
| Feature | Direct MCP | MCP Gateway |
|---------|------------|-------------|
| Policy enforcement | None | Every call |
| Content scanning | None | Inbound and outbound |
| Audit trail | None | Hash-chained receipts |
| Approval workflows | None | Built-in escalation |
| Monitoring | None | Sentinel anomaly detection |
The gateway adds a few milliseconds of latency per call. For most agent workloads this is negligible compared to the LLM inference time.
For production, run the gateway as a persistent service behind your infrastructure. The Docker deployment guide covers container-based setups. Use environment variables for secrets and configure the gateway to connect to your control plane for centralized policy management.
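A minimal Compose sketch of that setup follows; the image name, mount paths, and environment variable are assumptions for illustration, so check the Docker deployment guide for the real values:

```yaml
# docker-compose.yml — illustrative only; image name and
# AUTHENSOR_API_KEY are hypothetical.
services:
  mcp-gateway:
    image: authensor/mcp-gateway:latest
    command: ["--config", "/etc/mcp-gateway.yaml"]
    volumes:
      - ./mcp-gateway.yaml:/etc/mcp-gateway.yaml:ro
      - ./policy.yaml:/etc/policy.yaml:ro
    environment:
      - AUTHENSOR_API_KEY=${AUTHENSOR_API_KEY}   # secret via env, not baked in
    restart: unless-stopped
```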