EU AI Act compliance for startups — the August 2 2026 deadline
EU AI Act high-risk Annex III enforcement begins Aug 2 2026. Article 11 + Annex IV technical documentation is functionally an AIBOM mandate. This guide walks the in-scope test, the documentation requirements, and the AIBOM template + Securie crates that produce the evidence.
The EU AI Act entered into force on August 1 2024 and becomes fully applicable on August 2 2026. High-risk Annex III systems require Article 11 technical documentation + risk-management evidence + post-market monitoring. Most consumer AI SaaS is NOT high-risk, but products in employment / education / credit scoring / law enforcement / critical infrastructure / migration ARE — and penalties under the Act run up to €35M or 7% of worldwide annual turnover.
What it is
Article 11 + Annex IV require providers to maintain detailed technical documentation covering: training data characteristics, model specifications, system design, accuracy + robustness metrics, risk-management system, human-oversight design, and ongoing post-market monitoring. The AIBOM (AI Bill of Materials) is the canonical machine-readable form — CycloneDX 1.6 or SPDX 3 with AI extensions. OWASP's AIBOM project (launched 2025) ships an open-source generator + validator.
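The presence of these fields can be checked mechanically. Below is a minimal sketch of such a completeness check (not the OWASP validator; the required-field list is an assumption mapped loosely onto Article 11 / Annex IV headings, using CycloneDX 1.6's `modelCard` structure):

```python
# Minimal completeness check for a CycloneDX 1.6 AIBOM component.
# The REQUIRED_PATHS list is illustrative, not an official rule set.
REQUIRED_PATHS = [
    ("licenses",),                                                # license / license-source
    ("modelCard", "modelParameters", "datasets"),                 # training data characteristics
    ("modelCard", "quantitativeAnalysis", "performanceMetrics"),  # accuracy/fairness metrics
]

def missing_fields(component: dict) -> list[str]:
    """Return the dotted paths that are absent from the component."""
    missing = []
    for path in REQUIRED_PATHS:
        node = component
        for key in path:
            if not isinstance(node, dict) or key not in node:
                missing.append(".".join(path))
                break
            node = node[key]
    return missing

incomplete = {"type": "machine-learning-model", "name": "my-credit-scoring-model"}
print(missing_fields(incomplete))
# -> ['licenses', 'modelCard.modelParameters.datasets',
#     'modelCard.quantitativeAnalysis.performanceMetrics']
```

A check like this belongs in CI so an AIBOM that would fail an Article 11 review never ships.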
Vulnerable example
# Incomplete AIBOM — missing required Article 11 fields
bomFormat: CycloneDX
specVersion: "1.6"
components:
  - type: machine-learning-model
    name: my-credit-scoring-model
    # MISSING: training data characteristics
    # MISSING: model card link
    # MISSING: license / license-source
    # MISSING: inputs/outputs schema
    # MISSING: accuracy/fairness metrics
    # An auditor reviewing this fails Article 11 documentation.
Fixed example
# Complete AIBOM — passes Article 11 + Annex IV
bomFormat: CycloneDX
specVersion: "1.6"
components:
  - type: machine-learning-model
    name: my-credit-scoring-model
    version: "2.1.0"
    licenses:
      - license: { id: "CC-BY-NC-4.0" }
    modelCard:
      modelParameters:
        task: "Binary classification — credit approval"
        architectureFamily: "transformer"
        datasets:
          - type: "dataset"
            classification: "training"
            governance:
              owners: [{ contact: { email: "data-eng@example.com" } }]
              custodians: [{ contact: { email: "dpo@example.com" } }]
      quantitativeAnalysis:
        performanceMetrics:
          - type: "accuracy"
            value: "0.892"
          - type: "demographic-parity"
            value: "0.04"
      considerations:
        ethicalConsiderations:
          - name: "In-scope for EU AI Act Annex III credit-scoring; post-market monitoring active per Article 72."
How Securie catches it
apps/web/app/api/route.ts:22
Securie's `crates/sbom` emits CycloneDX 1.6 AIBOM on every build (alongside the standard SBOM); the format is wired into the attestation chain so every release ships a signed AIBOM. The `crates/production-ready` 50-control checklist includes the EU AI Act high-risk subset with per-control evidence. The /api/auditor/bundle/[commit] route assembles the AIBOM + signed attestations + FAIR risk-rollup into one bundle the auditor verifies with `cosign verify-blob`.
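On the auditor's side, the check reduces to "does the signed attestation actually cover these AIBOM bytes". The signature itself is what `cosign verify-blob` validates; the digest-binding step underneath can be sketched as follows, assuming an in-toto-style statement with a `subject[].digest.sha256` field (both the statement shape and the helper name are illustrative):

```python
import hashlib

def attestation_covers(aibom_bytes: bytes, statement: dict) -> bool:
    """True if the attestation's subject digest matches the AIBOM bytes.

    `statement` is assumed to be an in-toto-style Statement:
      {"subject": [{"name": ..., "digest": {"sha256": "<hex>"}}], ...}
    """
    digest = hashlib.sha256(aibom_bytes).hexdigest()
    return any(
        subj.get("digest", {}).get("sha256") == digest
        for subj in statement.get("subject", [])
    )

aibom = b'{"bomFormat": "CycloneDX", "specVersion": "1.6"}'
stmt = {"subject": [{"name": "aibom.json",
                     "digest": {"sha256": hashlib.sha256(aibom).hexdigest()}}]}
print(attestation_covers(aibom, stmt))        # True
print(attestation_covers(b"tampered", stmt))  # False
```

If the digest does not match, the attestation was produced for different AIBOM bytes and the evidence chain is broken regardless of whether the signature verifies.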
Checklist
- Self-classify against Annex III: credit / employment / education / law enforcement / migration / critical infra = high-risk
- Author Article 11 technical documentation (use /templates/article-11-technical-documentation)
- Emit CycloneDX AIBOM on every release (use /templates/aibom-cyclonedx-template)
- Document risk-management system per Article 9
- Design human-oversight per Article 14 (override, halt, audit-trail)
- Establish post-market monitoring per Article 72
- Plan conformity assessment route — self vs Notified Body — before Aug 2 2026
- Designate an EU representative if you're a non-EU provider with EU users
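The first checklist item, self-classification, is essentially a lookup once you know which Annex III areas your product touches. A toy sketch (the category set mirrors the shortlist in this guide, not the Act's full legal text; treat a "high-risk" result as "go read Annex III", not as legal advice):

```python
# Toy Annex III self-classification helper. Illustrative only.
ANNEX_III_AREAS = {
    "credit-scoring", "employment", "education",
    "law-enforcement", "migration", "critical-infrastructure",
    "biometrics",
}

def risk_class(product_areas: set[str]) -> str:
    """Flag any overlap between a product's areas and the Annex III shortlist."""
    hits = product_areas & ANNEX_III_AREAS
    if hits:
        return f"high-risk (Annex III: {', '.join(sorted(hits))})"
    return "not high-risk"

print(risk_class({"credit-scoring", "analytics"}))  # high-risk (Annex III: credit-scoring)
print(risk_class({"crm", "analytics"}))             # not high-risk
```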
FAQ
When does the deadline take effect?
August 2, 2026 for high-risk Annex III systems. The Act entered into force on August 1 2024, so the two-year transition window for high-risk providers is already well underway.
Are SaaS startups in scope?
Only if your product falls under Annex III high-risk categories: credit / employment / education / law enforcement / migration / critical infra / biometrics / certain child-protection contexts. Most B2B SaaS is NOT high-risk.
What's the penalty for non-compliance?
Up to €35M or 7% of worldwide annual turnover (whichever is higher) for prohibited-AI-practice violations; up to €15M or 3% for non-compliance with most other obligations, including Article 11 documentation.
Related guides
You're a one-person SaaS with a handful of EU customers. Do you need to be GDPR compliant? Yes. Here's the minimum viable version — what to collect, what to publish, what to skip until you're bigger.
The EU AI Act's second enforcement wave lands August 2026. If your product uses a large language model — directly or via a wrapper — here is what you need to publish, document, and do before the deadline.
California's privacy law applies to any SaaS that has a paying Californian customer. Here's the minimum viable compliance checklist, written for founders who've never done it before.
If your SaaS touches any health information — wearables, mental-health apps, telehealth — you may be subject to HIPAA. Here is how to tell if you are in scope, what it takes to comply, and when to just say no.