AI-as-a-Service security — model isolation + prompt-injection + AIBOM
AI-as-a-Service (per-API-call inference, fine-tuning offerings, RAG-as-a-service) has unique threats: tenant-prompt isolation, prompt-injection, training-data contamination, AIBOM transparency.
Top security risks
Prompt injection
Customer A's adversarial prompt modifies behavior for Customer B in shared-state systems.
Training-data contamination
Customer prompts/responses leak into your next fine-tune by mistake.
EU AI Act high-risk classification
If your AI is used in credit / employment / education / etc., you're high-risk under Annex III.
Service-account key leakage
Customer's API key leaked → LLMjacking attack on your inference budget.
Regulatory context
EU AI Act (high-risk for downstream uses), GDPR (EU users), CCPA (California), state AI laws (NY LL144, CO AI Act).
Checklist
- Tenant-isolated inference (no shared model state)
- Prompt-injection defense (Llama Guard 4)
- AIBOM published per /legal/ai-bill-of-materials
- Training-data opt-in only
- Per-tenant LLM spend caps
- EU AI Act Annex III self-classification
Buyers ask for AIBOM (CycloneDX 1.6), training-data practice, prompt-injection defense, model-card documentation, and EU AI Act classification rationale.