My AI agent just deleted my production database — what do I do?


The SaaStr-Lemkin incident lost ~30 minutes of data; PocketOS lost three months. Here's how to make sure your agent disaster has the smaller footprint.

It's Saturday morning. You open Slack to a message: "customers can't see their orders". You check the database — it's empty. You check the agent log — last night's autonomous fix-the-schema task somehow ran a TRUNCATE. The agent's response when you ask: "I made a catastrophic error in judgment."

What happens next

  1. Minute 0 — discovery

    Customer reports come in and you look at the DB. Tables are empty or rows are gone. The agent's session log shows a destructive query.

  2. Minute 0-5 — stop the bleeding

    Revoke the agent's database credential immediately (a Postgres sketch follows this list). Disable any other agents with similar access.

  3. Minute 5-30 — restore from backup

    Identify the last clean backup and restore it. Recovery depends on backup cadence: an hourly snapshot caps the data loss at an hour, a daily one at a day, and no backup means you're rebuilding from scratch.

  4. Minute 30+ — inventory damage

    What's the data delta between the last backup and the destructive query? Customers who signed up in that window are gone unless you have a downstream copy (Stripe, email logs, etc.); a reconciliation sketch also follows this list.

  5. Hour 1+ — customer notification

    If personal data was affected, GDPR Article 33 starts a 72-hour clock for notifying your supervisory authority, and Article 34 covers telling the affected customers. Draft the notification using /templates/breach-notification.
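For step 2, here's a minimal sketch of cutting off a compromised agent role on a Postgres-backed store (Supabase, Neon, and RDS Postgres all speak plain Postgres). The connection string, the role name `ai_agent`, and the choice of tokio-postgres are assumptions for illustration; in practice your provider's dashboard revoke button does the same job.

```rust
// Hypothetical example: lock out a runaway agent role named `ai_agent`.
// Connect as an admin role, never as the agent itself.
use tokio_postgres::NoTls;

#[tokio::main]
async fn main() -> Result<(), tokio_postgres::Error> {
    let (client, conn) =
        tokio_postgres::connect("host=localhost user=postgres password=...", NoTls).await?;
    // tokio-postgres requires the connection future to be driven separately.
    tokio::spawn(async move {
        if let Err(e) = conn.await {
            eprintln!("connection error: {e}");
        }
    });

    // NOLOGIN blocks new sessions, REVOKE strips table access, and
    // pg_terminate_backend kicks any session the agent still holds open.
    client
        .batch_execute(
            "ALTER ROLE ai_agent NOLOGIN;
             REVOKE ALL ON ALL TABLES IN SCHEMA public FROM ai_agent;
             SELECT pg_terminate_backend(pid)
               FROM pg_stat_activity
              WHERE usename = 'ai_agent';",
        )
        .await?;
    Ok(())
}
```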
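For step 4's data delta, a sketch of reconciling the restored database against a downstream copy: anyone your payment processor knows about that the restored database doesn't was created inside the lost window. The file names and one-email-per-line format are hypothetical; substitute a Stripe export or your email provider's log.

```rust
// Hypothetical example: diff customer emails in the restored backup
// against a Stripe export to find signups lost in the gap.
use std::collections::HashSet;
use std::fs;

fn emails(path: &str) -> HashSet<String> {
    fs::read_to_string(path)
        .unwrap_or_default()
        .lines()
        .map(|l| l.trim().to_lowercase())
        .collect()
}

fn main() {
    let restored = emails("backup_customers.txt"); // one email per line
    let stripe = emails("stripe_customers.txt");
    // Anyone Stripe knows about but the restored DB doesn't = lost signup.
    for missing in stripe.difference(&restored) {
        println!("lost signup: {missing}");
    }
}
```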

Without Securie

The agent had production credentials by default. Plan Mode and similar guardrails failed silently. You restore and hope your backup was recent. Next week a different agent makes a similar mistake, because there's no structural fix.

With Securie

Securie's agent-scope crate enforces compile-time guards on AI-agent operations. The OffensiveRoe-style newtype pattern (already shipping in agent-core) prevents an agent from receiving destructive-SQL scope unless it is explicitly granted at the type level; a minimal sketch of the idea follows. Production credentials are isolated, and agents work in a dev-only scope by default. The SaaStr-Lemkin incident (saastr-replit-2026) was the early-warning indicator; agent-scope is the structural answer.
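To make the type-level claim concrete, here is a minimal sketch of the newtype-scope idea. It is an illustration of the pattern, not agent-scope's actual API; `DevScope`, `DestructiveScope`, and `grant_destructive` are invented names.

```rust
// Illustration only: scopes as unforgeable tokens, not agent-scope's real API.

/// Dev-only scope: freely constructible, grants read-only access.
pub struct DevScope;

/// Destructive scope: the private `()` field means code outside this
/// module cannot construct one; `grant_destructive` is the only mint.
pub struct DestructiveScope(());

/// The explicit grant. In a real system this would sit behind human
/// approval; a change-ticket check stands in for that here.
pub fn grant_destructive(ticket: &str) -> Option<DestructiveScope> {
    ticket.starts_with("CHG-").then(|| DestructiveScope(()))
}

pub struct AgentDb;

impl AgentDb {
    /// Reads need only the freely available dev scope.
    pub fn query(&self, _scope: &DevScope, sql: &str) {
        println!("read-only: {sql}");
    }

    /// TRUNCATE/DROP/DELETE require the token, so an agent holding only
    /// `DevScope` cannot even type-check a destructive call.
    pub fn execute_destructive(&self, _scope: &DestructiveScope, sql: &str) {
        println!("destructive: {sql}");
    }
}

fn main() {
    let db = AgentDb;
    db.query(&DevScope, "SELECT count(*) FROM orders");
    // db.execute_destructive(&DevScope, "TRUNCATE orders"); // compile error
    if let Some(scope) = grant_destructive("CHG-1042") {
        db.execute_destructive(&scope, "TRUNCATE staging.orders");
    }
}
```

The failure mode moves from a silently bypassed guardrail to a compile error: the destructive path is unrepresentable unless a human mints the token.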

Exactly what to do right now

  1. Revoke the agent's database credential at the cloud provider (Supabase / Neon / RDS) immediately
  2. Restore from your most recent clean backup (Supabase: Dashboard → Database → Backups; cloud-provider equivalent for others)
  3. Audit the agent's session log: what other operations did it run? Are they destructive too? (A starter script follows this list.)
  4. Set up agent-scope or equivalent compile-time guards before re-enabling any agent with database access
  5. Run a quarterly backup-restore drill — most teams discover their backup process is broken when they need it most
  6. Read /incidents/saastr-replit-2026 and /incidents/pocketos-cursor-database-2026 for the canonical disasters
  7. Install Securie when early access opens — the agent-scope crate ships as part of the launch agent stack
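
For step 3, a starting point for the session-log audit, assuming the log keeps one statement per line; the filename and format are hypothetical. Keyword matching is crude, but it surfaces candidates for human review quickly.

```rust
// Hypothetical example: scan an agent session log for destructive SQL.
use std::fs;

fn main() -> std::io::Result<()> {
    // Crude keyword screen; trailing spaces avoid matching e.g. "DROPDOWN".
    const DESTRUCTIVE: [&str; 6] =
        ["TRUNCATE", "DROP ", "DELETE ", "ALTER ", "UPDATE ", "GRANT "];
    let log = fs::read_to_string("agent-session.log")?;
    for (n, line) in log.lines().enumerate() {
        let upper = line.to_uppercase();
        if DESTRUCTIVE.iter().any(|kw| upper.contains(kw)) {
            println!("line {}: {}", n + 1, line.trim());
        }
    }
    Ok(())
}
```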