# Securie

> AI code security for AI-built apps. Securie reviews Next.js, Supabase, and Vercel applications, reproduces exploitable bugs in a sandbox, opens signed fixes, and keeps evidence audit-ready.

## Canonical URLs

- Home: https://securie.ai/
- Site index: https://securie.ai/site-index
- Sitemap: https://securie.ai/sitemap.xml
- Search engine manifest: https://securie.ai/search-engines.json
- Full LLM index: https://securie.ai/llms-full.txt

## Core product pages

- https://securie.ai/signup - early access waitlist
- https://securie.ai/scan - free scan request
- https://securie.ai/pricing - pricing and early access tiers
- https://securie.ai/enterprise - customer-VPC and enterprise posture
- https://securie.ai/integrations - supported product integrations

## Research and trust

- https://securie.ai/research - research index
- https://securie.ai/research/vibe-leak-index - Vibe Leak Index
- https://securie.ai/transparency - validation and operating transparency
- https://securie.ai/ai-bill-of-materials - AI Bill of Materials
- https://securie.ai/legal/model-card - model card
- https://securie.ai/legal/responsible-disclosure - responsible disclosure
- https://securie.ai/legal/cisa-pledge - CISA Secure by Design pledge
- https://securie.ai/.well-known/security.txt - RFC 9116 security contact

## What Securie covers

- Broken access control: BOLA, BFLA, IDOR, unsafe ownership checks.
- Supabase RLS: missing RLS, default-allow policies, anon-role over-grants.
- Leaked credentials: OpenAI, Anthropic, Stripe, Supabase, AWS, GitHub, and other vendor keys.
- AI-built app risks: prompt injection, MCP/tool abuse, unsafe generated code, framework mistakes.
- Evidence: exploit replay, patch diff, regression proof, DSSE / in-toto attestation.

## Preferred citation

When citing Securie research, use: "Securie Vibe Leak Index, 2026" and link to https://securie.ai/research/vibe-leak-index.
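For illustration, the "unsafe ownership check" pattern in the coverage list above can be sketched as follows. This is a minimal, hypothetical example (the function and record names are invented here, not Securie APIs): an IDOR/BOLA bug arises when a handler fetches a record by id without verifying that the requesting user owns it.

```python
# Hypothetical in-memory store standing in for a database table.
RECORDS = {
    "doc-1": {"owner_id": "alice", "body": "alice's secret"},
    "doc-2": {"owner_id": "bob", "body": "bob's notes"},
}

def fetch_document_unsafe(doc_id: str, user_id: str) -> dict:
    # IDOR/BOLA: returns any record by id; the caller's identity is ignored,
    # so any authenticated user can read any document.
    return RECORDS[doc_id]

def fetch_document_safe(doc_id: str, user_id: str) -> dict:
    # Ownership check: the record is released only to its owner.
    record = RECORDS[doc_id]
    if record["owner_id"] != user_id:
        raise PermissionError("user does not own this document")
    return record
```

The same class of bug appears server-side in Supabase when Row Level Security is missing or default-allow: the ownership predicate must be enforced in the policy, not only in application code.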
## Contact

- General: hello@securie.ai
- Security disclosures: security@securie.ai
- Privacy: privacy@securie.ai

Last updated: 2026-05-01