What is OWASP LLM Top 10 (OWASP Top 10 for Large Language Model Applications)?
The OWASP project's canonical list of the 10 most critical security risks for LLM-powered applications. Distinct from the regular OWASP Top 10. Current edition (2025) covers LLM01 through LLM10.
Full explanation
The 2025 edition: LLM01 Prompt Injection, LLM02 Sensitive Information Disclosure, LLM03 Supply Chain, LLM04 Data and Model Poisoning, LLM05 Improper Output Handling, LLM06 Excessive Agency, LLM07 System Prompt Leakage, LLM08 Vector and Embedding Weaknesses, LLM09 Misinformation, LLM10 Unbounded Consumption. (The older numbering — Insecure Output Handling at LLM02, Model Denial of Service, Insecure Plugin Design, Model Theft — is the 2023 edition.) It complements, rather than replaces, the regular OWASP Top 10; most production AI apps need both. It is widely referenced in recent AI-security reports and in EU AI Act conformity-assessment guidance.
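To make one of these concrete: LLM05 Improper Output Handling covers rendering raw model output in a trusted context. A minimal sketch, assuming the model's reply is inserted into an HTML page (the function name is hypothetical):

```python
import html

def render_llm_output(raw: str) -> str:
    """Escape model output before inserting it into an HTML page,
    so an injected reply like '<script>...' renders as inert text
    instead of executing in the user's browser (LLM05)."""
    return html.escape(raw)

# A prompt-injected model reply carrying an XSS payload:
payload = '<img src=x onerror=alert(1)>'
safe = render_llm_output(payload)  # angle brackets become &lt; / &gt;
```

The same principle applies to any downstream sink: parameterize SQL, shell-quote commands, and treat model output as untrusted user input.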
Example
A RAG-powered support chatbot that lets users upload documents faces LLM01 (prompt injection via user input and uploaded docs), LLM05 (improper output handling if raw model output is rendered in the UI), LLM04 (data and model poisoning if uploaded docs feed back into a fine-tune loop), and LLM02 (sensitive information disclosure if support-DB context leaks through the prompt). Mitigations: sanitize input, escape output, score uploads for poisoning signals, and audit prompt templates.
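The mitigation pipeline above can be sketched as follows. The pattern list, thresholds, and function names are hypothetical; a real deployment would use a trained classifier or a dedicated guardrail service rather than a regex list, but the layering (screen what goes in, escape what comes out) is the point:

```python
import html
import re

# Hypothetical heuristic: phrases often seen in indirect prompt-injection
# payloads hidden inside uploaded documents.
SUSPECT_PATTERNS = [
    r"ignore (\w+ ){0,3}instructions",
    r"you are now",
    r"system prompt",
]

def screen_upload(text: str) -> bool:
    """Return True if the document looks safe to index into the RAG store
    (LLM01 input screening / LLM04 poisoning check)."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in SUSPECT_PATTERNS)

def render_answer(llm_output: str) -> str:
    """Escape model output before it reaches the browser (LLM05)."""
    return html.escape(llm_output)
```

Flagged uploads would go to quarantine for review instead of the vector store; escaping happens at the last step, regardless of what the screening layer caught.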
FAQ
Is this the same as the regular OWASP Top 10?
No. The regular OWASP Top 10 (broken access control, injection, etc.) still applies to your app's auth and data layers. The LLM Top 10 covers the risks that only exist when you ship LLMs in production.
How often is it updated?
Roughly every 2-3 years. The 2023 edition was the first; 2025 is the current one as of mid-2026. Watch genai.owasp.org for the next refresh.