← Back to Learn
eu-ai-actcompliancebest-practices

EU AI Act August 2026 compliance deadline: what to do now

Authensor

The EU AI Act's requirements for high-risk AI systems take effect in August 2026. If you are building or operating AI agents that fall into the high-risk category, you need to start compliance work now. This guide provides a practical timeline.

Who this applies to

Your AI agent system is likely high-risk if it:

  • Makes decisions that affect people's rights (hiring, credit, insurance)
  • Operates in critical infrastructure (energy, water, transport)
  • Is used in education, law enforcement, or border management
  • Manages access to essential services

If your agent is a general-purpose assistant with no decision-making authority, it likely falls into a lower risk category with fewer requirements.

Timeline

Now through Q2 2026: Preparation

Risk classification: Determine whether your system is high-risk. Consult the Act's Annex III for the list of high-risk use cases.

Gap analysis: Compare your current system against Articles 9 through 15. Identify which requirements you already meet and which need work.

Technical implementation: Deploy the technical controls needed:

  • Policy engine with documented rules (Articles 9, 14)
  • Audit trail with receipt logging (Article 12)
  • Content scanning for adversarial inputs (Article 15)
  • Approval workflows for human oversight (Article 14)
  • Behavioral monitoring for anomaly detection (Article 9)

Q2 2026: Testing and documentation

Testing: Run red team exercises, policy validation tests, and behavioral monitoring verification. Document results.

Technical documentation: Compile the technical file required by Article 11: system design, risk assessment, testing results, and operational procedures.

Conformity assessment: Determine whether your system needs third-party conformity assessment or can use self-assessment. Most AI agent systems will use self-assessment under the Act's provisions.

August 2026: Compliance

Register in the EU database: High-risk AI systems must be registered before being placed on the market.

Monitoring: Continue post-market monitoring as required by Article 72.

Practical checklist

[ ] Risk level classified
[ ] Risk management system documented
[ ] Policy engine deployed with documented rules
[ ] Audit trail generating hash-chained receipts
[ ] Retention policy set (minimum 6 months)
[ ] Human oversight mechanisms in place
[ ] Content scanning enabled for adversarial inputs
[ ] Behavioral monitoring active
[ ] Red team testing completed and documented
[ ] Technical file compiled
[ ] Conformity assessment completed
[ ] EU database registration submitted

Starting with Authensor

Authensor provides the technical layer for several of these requirements. Deploy the SDK or control plane, configure your policies, enable Aegis and Sentinel, and verify receipt generation. The policy and receipt data forms the core of your Article 9 and Article 12 compliance evidence.

The organizational layer (documentation, governance, processes) is separate from the technical layer. Both are required.

Keep learning

Explore more guides on AI agent safety, prompt injection, and building secure systems.

View All Guides