Can I get sued for security bugs in AI-written code?
Yes. You're legally responsible for what your app does to users, regardless of who or what wrote the code. In the US, that means state privacy laws (California's CCPA, New York's SHIELD Act), FTC Act Section 5, and ordinary tort negligence. In the EU, the GDPR applies regardless of authorship.
There is no 'AI wrote it, not me' defense in any jurisdiction. It's your product, your app is your responsibility, and if it leaks user data, you're on the hook.
**What you could face in the US** after a breach:

- State-level class-action lawsuits (California's CCPA private right of action, Illinois's BIPA for biometrics, etc.) — costs run from $100K to $50M depending on scale
- FTC enforcement for unfair or deceptive practices — consent decrees typically include 20 years of security audits
- State AG investigations (NY, CA, TX, and WA are the most active) — penalties up to $7,500 per violation in some cases
- Breach notification to every state where you have affected users — all 50 states have notification laws
**In the EU/UK under GDPR:**

- Fines of up to 4% of global annual revenue or €20M, whichever is higher
- Data Protection Authority investigations
- Lawsuits from individual users for non-material damages (recent EU court rulings have expanded this significantly)
**What reduces your exposure:**

1. Reasonable security measures documented in writing — even a one-page 'security practices' page helps
2. Prompt breach notification (72 hours under GDPR; varies by state in the US)
3. A `/.well-known/security.txt` file so researchers can report bugs before attackers exploit them
4. A signed DPA (data processing agreement) with enterprise customers
5. A pre-launch security scan on record, with the findings fixed
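The security.txt file mentioned above is just a plain-text file served at `/.well-known/security.txt`, standardized in RFC 9116. A minimal sketch — the contact address and URLs here are placeholders you'd swap for your own:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Preferred-Languages: en
Policy: https://example.com/security-policy
```

`Contact` and `Expires` are the two required fields; everything else is optional. It takes five minutes to add and signals to researchers (and, later, to regulators) that you had a responsible-disclosure channel in place.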
Securie's free scan (launching this year) will give you a documented, time-stamped snapshot of what you checked before launch. Join the list now and your scan runs in week one, producing the single most defensible artifact you can have if something goes wrong later.