prompt-injection · content-safety · explainer

Information-Theoretic Prompt Injection Detection

Authensor

Prompt injection attacks insert adversarial instructions into text that an AI agent processes. Information-theoretic detection measures statistical properties of the input text and flags inputs whose properties deviate from expected distributions. This approach complements pattern-matching detection by catching injections that do not match known signatures.

Entropy-Based Detection

Shannon entropy measures the information content of a text. Normal user input and normal documents have characteristic entropy ranges that depend on language and domain. Prompt injections often have different entropy profiles because they contain instruction-like text mixed with content that manipulates the model's interpretation.

Calculate the entropy of the input and compare it to the expected range. Inputs with entropy significantly above or below the expected range warrant closer inspection.
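As a minimal sketch, a character-level entropy check might look like the following. The threshold bounds here are illustrative assumptions, not measured values; in practice they must be calibrated against your own traffic.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Shannon entropy of the character distribution, in bits per character."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Illustrative bounds for English prose; calibrate on real traffic.
ENTROPY_LOW, ENTROPY_HIGH = 3.5, 5.0

def entropy_flag(text: str) -> bool:
    """True if the input falls outside the expected entropy range."""
    h = shannon_entropy(text)
    return not (ENTROPY_LOW <= h <= ENTROPY_HIGH)
```

Highly repetitive text scores near zero bits per character, while random-looking text (encoded payloads, obfuscated strings) scores high; both ends of the range are worth a closer look.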

Perplexity Analysis

Perplexity measures how surprised a language model is by a text. Normal text in a given domain has predictable perplexity. Injected instructions often increase perplexity because they break the expected linguistic patterns of the surrounding content.

A sudden spike in per-token perplexity within a document may indicate the boundary between legitimate content and injected instructions.
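A real deployment would score tokens with an actual language model; as a self-contained stand-in, the sketch below uses a smoothed character-bigram model trained on baseline text, then scans for windows of unusually high mean surprisal. The class and function names, window size, and threshold are all hypothetical.

```python
import math
from collections import Counter

class BigramSurprisal:
    """Character-bigram stand-in for a language model: returns per-character
    surprisal in bits, with add-alpha smoothing over the training vocabulary."""

    def __init__(self, corpus: str, alpha: float = 0.5):
        self.alpha = alpha
        self.bigrams = Counter(zip(corpus, corpus[1:]))
        self.context = Counter(corpus[:-1])
        self.vocab = max(len(set(corpus)), 1)

    def surprisal(self, prev: str, ch: str) -> float:
        num = self.bigrams[(prev, ch)] + self.alpha
        den = self.context[prev] + self.alpha * self.vocab
        return -math.log2(num / den)

def surprisal_spikes(model: BigramSurprisal, text: str,
                     window: int = 20, threshold: float = 6.0) -> list:
    """Start indices of windows whose mean surprisal exceeds the threshold."""
    scores = [model.surprisal(a, b) for a, b in zip(text, text[1:])]
    return [i for i in range(len(scores) - window + 1)
            if sum(scores[i:i + window]) / window > threshold]
```

The window boundaries where spikes begin approximate the transition point between in-distribution content and injected material.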

Mutual Information

Mutual information measures the statistical dependence between two variables. In the context of prompt injection, measure the mutual information between different segments of the input. Legitimate documents have high mutual information between segments because the content is topically coherent. Injected text introduces a segment with low mutual information relative to the surrounding content.
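Estimating exact mutual information from a single document is difficult, so a common practical proxy (one assumption among several possible choices) compares the word distributions of adjacent segments with Jensen-Shannon divergence: high divergence corresponds to low shared information between segments.

```python
import math
from collections import Counter

def word_dist(text: str) -> dict:
    """Normalized unigram word distribution of a text segment."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p: dict, q: dict) -> float:
    """Jensen-Shannon divergence in bits: 0 for identical distributions,
    1 for distributions with disjoint support."""
    m = {k: (p.get(k, 0.0) + q.get(k, 0.0)) / 2 for k in set(p) | set(q)}
    def kl(a):
        return sum(v * math.log2(v / m[k]) for k, v in a.items() if v > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)
```

Split the input into fixed-size segments, score each adjacent pair, and flag pairs whose divergence exceeds a calibrated threshold as possible injection boundaries.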

Compression-Based Detection

A related approach uses compression ratios. Compress the input with a standard algorithm. Inputs with injected instructions may have different compression ratios than homogeneous text because the injected portion has different statistical properties than the surrounding content.
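A sketch of the idea using Python's standard zlib compressor follows; the segment length and the use of max-minus-min spread are illustrative choices.

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size over raw size; lower means more redundant text."""
    data = text.encode("utf-8")
    return len(zlib.compress(data, level=9)) / len(data)

def ratio_spread(text: str, seg_len: int = 200) -> float:
    """Spread of per-segment compression ratios across the document.
    A large spread suggests one region is statistically unlike the rest."""
    segments = [text[i:i + seg_len] for i in range(0, len(text), seg_len)]
    ratios = [compression_ratio(s) for s in segments if len(s) >= seg_len // 2]
    return max(ratios) - min(ratios) if ratios else 0.0
```

Segmenting before compressing matters: a whole-document ratio averages out the anomaly, while per-segment ratios localize it.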

Practical Considerations

Information-theoretic detectors are language-agnostic and do not require maintaining signature databases. They can detect novel injections with no existing signature, provided the injection changes the statistical properties of the input.

However, they produce false positives on legitimate inputs with unusual statistical properties: code snippets in natural language text, multilingual documents, or highly technical content. Combine information-theoretic detection with pattern-based detection (like Aegis) for defense in depth.

Integration Points

Information-theoretic analysis runs as a preprocessing step before the main content scanner. Flag inputs that exceed statistical thresholds for deeper analysis. Inputs that pass statistical checks proceed to pattern-based scanning.

Statistical anomalies in text are signals. Not every signal is an attack, but every attack changes the signal.
