What is Slopsquatting?
A supply-chain attack class where an LLM hallucinates a plausible package name that does not exist; an attacker pre-registers the hallucinated name with a malicious payload; the next AI-using developer pulls the malware. Term coined by Seth Larson (March 2025).
Full explanation
Distinct from typosquatting (which mimics an existing real package), slopsquatting exploits names the AI invents. AI coding assistants suggest package names with unearned confidence — Socket and Snyk research from 2024-2025 shows 5-20% of AI-suggested package names are hallucinated, with rates higher for Python (fragmented PyPI namespace) than for npm. Attackers monitor recurring LLM suggestions and pre-register the hallucinated names with malicious payloads. Mitigation: never trust AI-suggested package names verbatim; verify each suggestion against the registry; check the first-published date, weekly downloads, publisher, and GitHub repo link.
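The verification steps above can be sketched against PyPI's real JSON API (`https://pypi.org/pypi/<name>/json`, which returns 404 for unregistered names). This is a minimal illustration, not a complete vetting tool: the `min_age_days` threshold is an assumed heuristic, and the red-flag checks cover only the signals named in the text.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

# Real PyPI JSON API endpoint; returns HTTP 404 for names that were never registered.
PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"


def fetch_pypi_metadata(name):
    """Return the PyPI JSON metadata for `name`, or None if it does not exist."""
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name)) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # unregistered name: a candidate slopsquat target
        raise


def vet_package(meta, min_age_days=90):
    """Apply the checks from the text to already-fetched metadata.

    Returns a list of red flags; an empty list means nothing obviously wrong.
    `min_age_days` is an assumed heuristic, not an official threshold.
    """
    if meta is None:
        return ["package does not exist on PyPI -- do not install"]
    flags = []
    # First-published date: a very young package matching an AI suggestion is suspicious.
    upload_times = [
        f["upload_time_iso_8601"]
        for files in meta.get("releases", {}).values()
        for f in files
    ]
    if upload_times:
        first = datetime.fromisoformat(min(upload_times))
        age_days = (datetime.now(timezone.utc) - first).days
        if age_days < min_age_days:
            flags.append(f"first published only {age_days} days ago")
    else:
        flags.append("no released files")
    # Linked source repository: legitimate packages usually advertise one.
    urls = meta.get("info", {}).get("project_urls") or {}
    if not any("github.com" in (u or "") for u in urls.values()):
        flags.append("no GitHub repository link in project_urls")
    return flags
```

A pre-install hook or CI step could call `vet_package(fetch_pypi_metadata(name))` for every dependency an AI assistant proposes and refuse to proceed when any flags come back.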
Example
An AI assistant suggests `pip install langchain-helper-utils` in response to 'add a LangChain helper'. No such package was ever published by the LangChain project — but an attacker, anticipating the hallucination, pre-registered the name on PyPI with a malicious wheel that exfiltrates environment variables on install. The developer copy-pastes the command without verifying the package is real.
FAQ
Is slopsquatting the same as typosquatting?
No. Typosquatting registers a name that looks like an existing real package (e.g. `lodahs` vs `lodash`). Slopsquatting registers a name the AI hallucinated that does not exist anywhere — there is no real package being mimicked, just an attacker exploiting AI confidence.
How common are AI-hallucinated package names?
Research from 2024-2025 shows 5-20% of AI-suggested package names are hallucinated, depending on the model and the task. The rate is higher for Python than for npm.