Article 9 of the EU AI Act requires providers of high-risk AI systems to establish and maintain a risk management system. For AI agents, this means systematically identifying what can go wrong and implementing controls to reduce those risks.
Under Article 9, the risk management system must:

- identify and analyse the known and reasonably foreseeable risks the system poses to health, safety, and fundamental rights;
- estimate and evaluate the risks that may arise when the system is used for its intended purpose or under reasonably foreseeable misuse;
- evaluate other risks that emerge from post-market monitoring data; and
- adopt appropriate, targeted risk management measures to address the identified risks.
This is a continuous process. The risk management system must be updated throughout the AI system's lifecycle, not created once and forgotten.
AI agents introduce specific risks that traditional software does not:

- **Prompt injection** — malicious instructions embedded in content the agent processes
- **Tool misuse** — legitimate tools invoked with harmful arguments
- **Privilege escalation** — the agent acquiring capabilities beyond what its task requires
- **Data exfiltration** — sensitive data leaking through outbound tool calls
- **Uncontrolled autonomy** — long-running loops that act without human oversight
For each risk, document the likelihood, potential impact, and mitigation measures.
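As a minimal sketch of what such documentation could look like in code, the following records each risk alongside its assessed likelihood, impact, and mitigation. The `RiskEntry` structure and its field names are illustrative assumptions, not a format prescribed by the Act:

```python
# Hypothetical risk-register entry for Article 9 documentation.
# Field names and severity scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str          # what can go wrong
    likelihood: str    # e.g. "low" / "medium" / "high"
    impact: str        # potential harm if the risk materialises
    mitigation: str    # the control mapped to this risk

register = [
    RiskEntry(
        risk="Prompt injection",
        likelihood="high",
        impact="Unauthorized tool execution on attacker instructions",
        mitigation="Content scanning on all untrusted inputs",
    ),
    RiskEntry(
        risk="Data exfiltration",
        likelihood="medium",
        impact="Sensitive data leaves the system via outbound tool calls",
        mitigation="Output filtering on outbound tool calls",
    ),
]
```

Keeping the register as structured data (rather than free-form prose) makes it straightforward to export into the technical file and to diff between releases.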
Map each identified risk to a concrete technical control:
| Risk | Mitigation | Implementation |
|------|-----------|----------------|
| Prompt injection | Content scanning | Aegis with prompt injection detectors |
| Tool misuse | Argument-level policy rules | Policy engine with when conditions |
| Privilege escalation | Least-privilege policies | Deny-by-default policy, explicit allow rules |
| Data exfiltration | Output filtering | Aegis scanning on outbound tool calls |
| Uncontrolled autonomy | Rate limits, approval workflows | Policy escalation rules, budget constraints |
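The deny-by-default pattern with argument-level `when` conditions from the table above can be sketched as follows. This is a simplified illustration under assumed names (`Rule`, `PolicyEngine`, the tool names), not the API of any particular policy engine:

```python
# Illustrative deny-by-default policy engine with argument-level
# "when" conditions. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    tool: str                                   # tool the rule explicitly allows
    when: Callable[[dict], bool] = field(default=lambda args: True)

class PolicyEngine:
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def is_allowed(self, tool: str, args: dict) -> bool:
        # Deny by default: a call passes only if an explicit rule matches
        # both the tool name and its "when" condition on the arguments.
        return any(r.tool == tool and r.when(args) for r in self.rules)

policy = PolicyEngine([
    Rule("read_file", when=lambda a: a.get("path", "").startswith("/workspace/")),
    Rule("send_email", when=lambda a: a.get("to", "").endswith("@example.com")),
])

print(policy.is_allowed("read_file", {"path": "/workspace/notes.txt"}))  # True
print(policy.is_allowed("read_file", {"path": "/etc/passwd"}))           # False
print(policy.is_allowed("delete_file", {"path": "/tmp/x"}))              # False: no rule
```

Because the engine denies anything without an explicit rule, adding a new tool to the agent never silently widens its privileges — it fails closed until a rule is written, which directly addresses the privilege escalation row above.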
Article 9 also requires testing to verify that the risk management measures actually work. For AI agents, this means:

- red-teaming the content scanner with known prompt injection payloads;
- verifying that policy rules deny disallowed tools and disallowed argument values;
- confirming that rate limits and approval workflows trigger under realistic load; and
- re-running these tests whenever the agent's tools, model, or policies change.
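A control that is never exercised is only a presumed control. As one hedged example of exercising a mitigation, the test below checks that a simple rate limiter (a stand-in for a real budget constraint, written here for illustration) actually starts blocking once the call budget is exhausted:

```python
# Illustrative test that a rate-limit control actually triggers.
# RateLimiter is a toy stand-in for a real budget-constraint mechanism.
class RateLimiter:
    def __init__(self, max_calls: int):
        self.max_calls = max_calls
        self.count = 0

    def allow(self) -> bool:
        self.count += 1
        return self.count <= self.max_calls

def test_rate_limit_blocks_runaway_loop():
    limiter = RateLimiter(max_calls=3)
    results = [limiter.allow() for _ in range(5)]
    # First three calls pass, every later call is blocked.
    assert results == [True, True, True, False, False]

test_rate_limit_blocks_runaway_loop()
```

The same pattern applies to every row of the mitigation table: each control gets at least one test that demonstrates the failure it is supposed to prevent, and those test results feed into the Article 11 technical file.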
Document every identified risk, the assessment of its severity, the chosen mitigation, and the testing results. This documentation is part of the technical file required under Article 11 and will be reviewed during conformity assessment.