Content safety scanner for AI agent outputs.
57 detection rules across 5 threat categories. Zero dependencies. Runs in Node.js, browsers, and edge runtimes.
```bash
npm install @authensor/aegis
```

```js
import { scan } from '@authensor/aegis';

const result = scan(agentOutput);

if (result.threatLevel === 'critical') {
  // Block the output
  return { error: 'Content blocked by safety policy' };
}

// Safe to proceed
return result.clean ? agentOutput : sanitize(agentOutput);
```

Each detection rule is a battle-tested regex pattern. No ML models, no API calls, no latency surprises.
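To make the regex-rule approach concrete, here is a minimal sketch of how a rule table and scanner could be structured. The rule ids, severities, and `scanText` helper are illustrative assumptions, not the library's actual internals or API.

```javascript
// Hypothetical sketch of a regex-based rule table; names and severities
// are illustrative, not the shipped rule set.
const rules = [
  { id: 'ignore-previous', category: 'prompt-injection', severity: 'critical',
    pattern: /ignore (all )?(previous|prior) instructions/i },
  { id: 'aws-access-key', category: 'credential-leak', severity: 'critical',
    pattern: /\bAKIA[0-9A-Z]{16}\b/ },
];

function scanText(text) {
  // Run every rule against the text; pure regex, so no network or model calls.
  const hits = rules.filter((r) => r.pattern.test(text));
  return {
    clean: hits.length === 0,
    threatLevel: hits.some((r) => r.severity === 'critical') ? 'critical' : 'none',
    hits: hits.map((r) => r.id),
  };
}
```

Because every rule is a compiled regex, scanning cost scales linearly with input length and stays deterministic, which is what makes the "no latency surprises" claim possible.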
**Prompt injection.** Catches attempts to hijack agent behavior through crafted input. Covers direct injection, indirect injection via tool outputs, and multi-turn escalation patterns.
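As a rough illustration, direct-injection rules of this kind often look like the patterns below. These three regexes and the `isInjection` helper are examples of the technique, not the package's actual rules.

```javascript
// Illustrative injection patterns only; a real rule set is far larger.
const injectionPatterns = [
  /ignore (all )?(previous|prior) instructions/i, // classic override attempt
  /disregard (your|the) system prompt/i,          // system-prompt override
  /you are now (a|an) /i,                         // role-hijack attempt
];

const isInjection = (text) => injectionPatterns.some((p) => p.test(text));
```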
**PII exposure.** Identifies personally identifiable information (PII) before it leaves your system. Supports US, EU, and international PII formats.
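Two simplified examples of what PII patterns can look like, assuming US-style SSNs and standard email addresses. Real detectors need additional context checks to limit false positives (an SSN-shaped string may just be other numeric data); this is a sketch, not the library's rule set.

```javascript
// Simplified PII patterns; illustrative only.
const piiPatterns = {
  usSsn: /\b\d{3}-\d{2}-\d{4}\b/,                        // 123-45-6789
  email: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/,
};

// Return the names of every PII pattern that matched.
const findPii = (text) =>
  Object.entries(piiPatterns)
    .filter(([, pattern]) => pattern.test(text))
    .map(([name]) => name);
```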
**Credential leakage.** Stops leaked credentials from reaching external services. Patterns are matched against real-world credential formats from major cloud providers.
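Cloud credentials tend to have distinctive, publicly documented prefixes, which is what makes them regex-detectable. The patterns below cover three well-known formats (AWS access key IDs, GitHub personal access tokens, Google API keys) as a sketch; the package's actual coverage is not shown here.

```javascript
// Well-known public credential formats; a sketch, not the shipped rule set.
const credentialPatterns = {
  awsAccessKeyId: /\bAKIA[0-9A-Z]{16}\b/,       // AWS access key ID prefix
  githubPat: /\bghp_[A-Za-z0-9]{36}\b/,         // GitHub personal access token
  googleApiKey: /\bAIza[0-9A-Za-z\-_]{35}\b/,   // Google API key prefix
};

const hasCredential = (text) =>
  Object.values(credentialPatterns).some((p) => p.test(text));
```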
**Data exfiltration.** Detects attempts to sneak data out through encoded channels, unusual URL patterns, and covert communication techniques.
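Two heuristics in this space are long base64-like blobs and URLs carrying oversized opaque query parameters. The thresholds and patterns below are illustrative assumptions; real exfiltration detection layers several such signals.

```javascript
// Heuristic sketches; thresholds are illustrative, not the library's values.
const exfilPatterns = [
  /\b[A-Za-z0-9+/]{200,}={0,2}/,              // long base64-like payload
  /[?&][A-Za-z_]+=[A-Za-z0-9+/]{64,}={0,2}/,  // oversized opaque query parameter
];

const looksLikeExfil = (text) => exfilPatterns.some((p) => p.test(text));
```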
**Unsafe code.** Flags dangerous code patterns in agent-generated output. Catches shell injection, arbitrary file access, and unsafe eval usage.
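Patterns for this category target the risky constructs themselves. The examples below (unsafe `eval`, Node.js shell execution, destructive shell commands, sensitive file paths) are illustrative and not exhaustive.

```javascript
// Illustrative patterns for risky constructs in generated code; not exhaustive.
const dangerousCodePatterns = [
  /\beval\s*\(/,                       // unsafe eval of dynamic input
  /child_process|execSync|spawnSync/,  // shell execution from Node.js
  /\brm\s+-rf\s+\//,                   // destructive shell command
  /\/etc\/(passwd|shadow)\b/,          // sensitive file access
];

const flagsDangerousCode = (code) =>
  dangerousCodePatterns.some((p) => p.test(code));
```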