When connecting an AI agent to MCP servers, you have two options: connect directly or route through a gateway. Each approach has different security, performance, and operational characteristics.
The agent connects directly to each MCP server:
Agent → MCP Server A (filesystem)
Agent → MCP Server B (database)
Agent → MCP Server C (web search)
Advantages:
- Simplicity: no extra infrastructure to deploy or operate.
- Lowest latency: no proxy hop between the agent and each server.
Disadvantages:
- Policy is enforced (if at all) inside each agent, so organization-wide rules are hard to apply consistently.
- No central audit point: each connection must be logged and monitored separately.
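A direct setup can be sketched as a config with one entry per server. The server names and URLs below are illustrative, and the config shape mirrors the gateway example later in this guide; your SDK's actual schema may differ.

```typescript
// Direct connection: the agent holds one entry per MCP server.
// Every name and address here is a placeholder, not a real endpoint.
const directConfig = {
  servers: {
    filesystem: { url: 'sse://fs.internal:3001' },    // MCP Server A
    database:   { url: 'sse://db.internal:3002' },    // MCP Server B
    webSearch:  { url: 'sse://search.internal:3003' } // MCP Server C
  }
};
```

Each new server means another entry here, and another connection the agent must secure and monitor on its own.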
The agent connects to the gateway, which connects to the MCP servers:
Agent → MCP Gateway → MCP Server A
                    → MCP Server B
                    → MCP Server C
Advantages:
- Organization-wide policy is enforced centrally, for every agent and every tool call.
- A single audit point: the gateway can record each call in one place.
Disadvantages:
- One extra proxy hop of latency per tool call.
- The gateway is additional infrastructure to deploy, and a single point of failure unless made redundant.
Use the SDK for application-level safety and the gateway for infrastructure-level safety:
// Application level: SDK guard
const guard = createGuard({ policy: applicationPolicy });

// Infrastructure level: route through gateway
const mcpConfig = {
  servers: {
    gateway: { url: 'sse://gateway.internal:3000' }
  }
};
The SDK enforces application-specific rules. The gateway enforces organization-wide rules. Both generate receipts that form a complete audit trail.
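The layering can be sketched as two independent checks that must both pass. The policy functions below are hypothetical stand-ins, not the SDK's or gateway's real API; they only illustrate the defense-in-depth structure.

```typescript
// Hypothetical sketch of layered enforcement: a tool call proceeds only if
// BOTH the application-level guard and the organization-wide gateway agree.
type ToolCall = { tool: string; args: Record<string, unknown> };

// Application-specific rule (illustrative): this app never shells out.
const applicationPolicy = (call: ToolCall): boolean =>
  call.tool !== 'shell.exec';

// Organization-wide rule (illustrative): no reads under /etc, for any app.
const gatewayPolicy = (call: ToolCall): boolean =>
  !('path' in call.args && String(call.args.path).startsWith('/etc'));

function isAllowed(call: ToolCall): boolean {
  return applicationPolicy(call) && gatewayPolicy(call);
}
```

Because the checks are independent, a bug or misconfiguration in one layer does not disable the other.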
The gateway adds one proxy hop per tool call. In practice, this is 1-5ms of additional latency. For most agent workloads, LLM inference takes 500ms to 5 seconds. An additional 5ms is negligible.
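The arithmetic behind "negligible" is straightforward; taking the worst-case numbers above:

```typescript
// Worst-case proxy overhead from the figures above:
// a 5 ms hop on top of a 500 ms inference call.
const hopMs = 5;
const inferenceMs = 500;
const overheadPct = (hopMs / inferenceMs) * 100; // 1% at worst
```

With a more typical multi-second inference call, the relative overhead drops well below one percent.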
If latency is critical, co-locate the gateway with the agent on the same machine or in the same data center. The policy engine runs in microseconds; the latency is primarily network overhead.