Is EU AI Act self-assessment safe?
Self-classification guide for the EU AI Act. Most consumer AI SaaS is NOT high-risk under Annex III, but products in employment, education, credit scoring, law enforcement, critical infrastructure, or biometrics ARE. Penalties under the Act scale up to €35M or 7% of global annual turnover for the most serious violations.
The EU AI Act becomes fully applicable on Aug 2, 2026 for high-risk Annex III systems. Self-classify against the 8 Annex III categories first: if you are out, you face at most transparency obligations (Article 50); if you are in, you need full Article 11 + Annex IV documentation + risk management + post-market monitoring + conformity assessment + CE marking before placing the system on the EU market.
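The decision flow above can be sketched as a classification function. The category names below paraphrase the 8 Annex III areas and the obligation split follows Article 6(2); this is an illustrative aid for a risk register, not a substitute for reading the Act text.

```rust
// Illustrative self-classification sketch. Category strings paraphrase
// Annex III; verify against the actual Act text before relying on this.

#[derive(Debug, PartialEq)]
enum Obligations {
    TransparencyOnly, // Article 50 at most
    FullHighRisk,     // Article 11 docs, risk mgmt, CE marking, ...
}

/// The 8 Annex III areas, paraphrased.
const ANNEX_III: [&str; 8] = [
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential services and credit scoring",
    "law enforcement",
    "migration and border control",
    "justice and democratic processes",
];

fn classify(use_case_area: &str) -> Obligations {
    if ANNEX_III.contains(&use_case_area) {
        Obligations::FullHighRisk
    } else {
        Obligations::TransparencyOnly
    }
}

fn main() {
    // A hiring recommender falls under employment (Annex III #4).
    assert_eq!(
        classify("employment and worker management"),
        Obligations::FullHighRisk
    );
    // A generic marketing copy assistant is outside Annex III.
    assert_eq!(
        classify("marketing copywriting"),
        Obligations::TransparencyOnly
    );
    println!("classification sketch ok");
}
```

The point of encoding this at all is to force a documented, reviewable answer per use case instead of a gut call.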
How it fails in production
Mis-classifying out of Annex III when in
The most common mistake. A 'helpful AI' that recommends candidates for hiring is high-risk under Annex III #4 (employment). A 'helpful AI' for tutoring is high-risk under #3 (education). Read each Annex III category against your specific use case carefully; in practice, the broad reading prevails.
Missing Article 11 + Annex IV documentation
Article 11 requires technical documentation covering the sections mandated in Annex IV. Without it, conformity assessment fails and the system cannot legally be placed on the EU market. Penalty: up to €15M or 3% of global annual turnover.
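One way to keep this gap visible before conformity assessment is a completeness check over the Annex IV sections. The section titles below are paraphrases of Annex IV (verify them against the Act text); the checker itself is a hypothetical sketch, not a legal artifact.

```rust
// Hedged sketch: report which Annex IV documentation sections are still
// missing. Section titles are paraphrased; check them against the Act.
use std::collections::HashSet;

const ANNEX_IV_SECTIONS: &[&str] = &[
    "general description of the AI system",
    "detailed description of elements and development process",
    "monitoring, functioning and control information",
    "appropriateness of performance metrics",
    "risk management system description (Article 9)",
    "lifecycle changes description",
    "applied harmonised standards or alternative solutions",
    "EU declaration of conformity",
    "post-market monitoring plan (Article 61)",
];

/// Returns every required section not present in `done`.
fn missing_sections<'a>(done: &HashSet<&str>, all: &[&'a str]) -> Vec<&'a str> {
    all.iter().copied().filter(|s| !done.contains(s)).collect()
}

fn main() {
    let done: HashSet<&str> = ["general description of the AI system"]
        .into_iter()
        .collect();
    let gaps = missing_sections(&done, ANNEX_IV_SECTIONS);
    println!(
        "{} of {} Annex IV sections still missing",
        gaps.len(),
        ANNEX_IV_SECTIONS.len()
    );
}
```

Wiring a check like this into CI turns "documentation drift" into a visible release blocker rather than an audit-time surprise.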
No human oversight design (Article 14)
High-risk systems must allow a human to override decisions, halt the system, and review an audit trail of its actions. Leaving oversight out of the system design forces a retrofit before market placement.
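A minimal sketch of what "oversight at the system level" means in code: an operator-facing controller that can override a decision or halt the system, with every intervention appended to an audit trail. All names here are illustrative assumptions, not terms from the Act.

```rust
// Hypothetical Article 14-style oversight hooks: override, halt, audit trail.
#[derive(Debug)]
enum OperatorAction {
    Override { decision_id: u64, reason: String },
    Halt { reason: String },
}

#[derive(Default)]
struct OversightController {
    halted: bool,
    audit_trail: Vec<String>, // append-only log of operator interventions
}

impl OversightController {
    fn apply(&mut self, action: OperatorAction) {
        // Record first, so the trail survives even if acting on it fails.
        self.audit_trail.push(format!("{:?}", action));
        if let OperatorAction::Halt { .. } = action {
            self.halted = true; // downstream inference must check this flag
        }
    }
}

fn main() {
    let mut ctl = OversightController::default();
    ctl.apply(OperatorAction::Override {
        decision_id: 42,
        reason: "candidate appealed".into(),
    });
    ctl.apply(OperatorAction::Halt {
        reason: "drift detected".into(),
    });
    assert!(ctl.halted);
    assert_eq!(ctl.audit_trail.len(), 2);
    println!("oversight sketch ok");
}
```

The design point is that the halt flag and audit trail live in the serving path itself, not in a dashboard bolted on later.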
No post-market monitoring (Article 61)
After placement, the provider must monitor for model drift, report serious incidents within 15 days (Article 62), and keep the risk-management system continuously up to date. Skipping post-market monitoring erodes the basis of your conformity over time.
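The 15-day reporting clock is easy to automate. A sketch, using plain day arithmetic for clarity (a real system would track wall-clock timestamps from the moment the provider becomes aware of the incident):

```rust
// Sketch of the Article 62 reporting clock: flag serious incidents that
// are still unreported past the 15-day deadline. Illustrative only.
const REPORTING_DEADLINE_DAYS: u64 = 15;

struct Incident {
    days_since_awareness: u64,
    reported: bool,
}

/// True if the incident is unreported and past the deadline.
fn overdue(i: &Incident) -> bool {
    !i.reported && i.days_since_awareness > REPORTING_DEADLINE_DAYS
}

fn main() {
    assert!(!overdue(&Incident { days_since_awareness: 10, reported: false }));
    assert!(overdue(&Incident { days_since_awareness: 16, reported: false }));
    assert!(!overdue(&Incident { days_since_awareness: 30, reported: true }));
    println!("incident-clock sketch ok");
}
```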
How to ship safely on EU AI Act self-assessment
- Self-classify against Annex III using the artificialintelligenceact.eu summary; document the rationale in your risk register
- If high-risk: author Article 11 documentation per Annex IV (use /templates/article-11-technical-documentation)
- Emit CycloneDX 1.6 AIBOM on every release (use /templates/aibom-cyclonedx-template)
- Design human-oversight per Article 14 at the system level
- Establish post-market monitoring per Article 61
- Choose conformity assessment route (Annex VI self vs Annex VII Notified Body) before Aug 2 2026
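For the AIBOM step above, a minimal CycloneDX 1.6 document for a single ML component might look like the fragment below. The component name and version are placeholders; `modelCard` is CycloneDX's container for ML metadata, shown here with only one illustrative field.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "components": [
    {
      "type": "machine-learning-model",
      "name": "example-model",
      "version": "1.0.0",
      "modelCard": {
        "modelParameters": {
          "task": "classification"
        }
      }
    }
  ]
}
```

A real AIBOM would enumerate datasets, dependencies, and attestations per release; the point is that the high-risk evidence trail starts from a machine-readable artifact, not a PDF.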
Securie's `crates/sbom` emits AIBOM on every release; `crates/production-ready` ships the 50-control checklist that includes the EU AI Act high-risk subset; `crates/fair-engine` produces risk-management evidence required by Article 9. The /api/auditor/bundle/[commit] route assembles signed AIBOM + attestations + FAIR rollup into one auditor-consumable bundle. Note: a Notified Body conformity assessment is human work that Securie does NOT perform — Securie produces the technical evidence the assessment consumes.
Verdict
If you sell into the EU and your product touches credit, employment, education, law enforcement, migration, critical infrastructure, or biometrics, you are likely in scope and the Aug 2, 2026 deadline is real. Most B2B SaaS is OUT of scope. The easiest mistake to make is self-classifying wrong: read each Annex III category broadly, document the classification rationale, and have your EU representative review it.