The EU AI Act's requirements for high-risk AI systems take effect in August 2026. If you are building or operating AI agents that fall into the high-risk category, you need to start compliance work now. This guide provides a practical timeline.
Your AI agent system is likely high-risk if it:
If your agent is a general-purpose assistant with no decision-making authority, it likely falls into a lower risk category with fewer requirements.
Risk classification: Determine whether your system is high-risk. Consult the Act's Annex III for the list of high-risk use cases.
Gap analysis: Compare your current system against Articles 9 through 15. Identify which requirements you already meet and which need work.
Technical implementation: Deploy the technical controls needed:
Testing: Run red team exercises, policy validation tests, and behavioral monitoring verification. Document results.
Technical documentation: Compile the technical file required by Article 11: system design, risk assessment, testing results, and operational procedures.
Conformity assessment: Determine whether your system needs third-party conformity assessment or can use self-assessment. Most AI agent systems will use self-assessment under the Act's provisions.
Register in the EU database: High-risk AI systems must be registered before being placed on the market.
Monitoring: Continue post-market monitoring as required by Article 72.
[ ] Risk level classified
[ ] Risk management system documented
[ ] Policy engine deployed with documented rules
[ ] Audit trail generating hash-chained receipts
[ ] Retention policy set (minimum 6 months)
[ ] Human oversight mechanisms in place
[ ] Content scanning enabled for adversarial inputs
[ ] Behavioral monitoring active
[ ] Red team testing completed and documented
[ ] Technical file compiled
[ ] Conformity assessment completed
[ ] EU database registration submitted
Authensor provides the technical layer for several of these requirements. Deploy the SDK or control plane, configure your policies, enable Aegis and Sentinel, and verify receipt generation. The policy and receipt data forms the core of your Article 9 and Article 12 compliance evidence.
The organizational layer (documentation, governance, processes) is separate from the technical layer. Both are required.
Explore more guides on AI agent safety, prompt injection, and building secure systems.
View All Guides