Your AI coding assistant just installed a malicious package. Not because it's compromised—because it made a typo.
The SANDWORM_MODE campaign shows how AI tools like Claude Code and OpenClaw have become prime targets for supply chain attacks. At least 19 typosquatted npm packages exploited a simple truth: AI tools suggest dependencies rapidly and don't verify package names like a human would.
This checklist helps you secure AI-integrated development workflows against supply chain attacks targeting both human developers and their AI assistants.
Prerequisites
Before implementing these controls, ensure you have:
- Package manager audit logs - Your npm, pip, or Maven registry must log installation attempts with timestamps and initiating user/process.
- CI pipeline isolation - Each build runs in an ephemeral environment with credential access limited to that specific job.
- Dependency manifest version control - All package.json, requirements.txt, and similar files are tracked with commit history showing who changed what.
Goal: You should be able to answer "what package was installed, by whom, from where, at what time" for any installation in the last 90 days.
Checklist Items
1. Implement Typosquatting Detection at the Registry Level
Configure your package manager to flag installations where the package name falls within a Levenshtein distance of 2-3 of a known popular package, and block those installations pending manual review.
Specific requirement: If you're under PCI DSS v4.0.1, this supports Requirement 6.3.2 (maintaining an inventory of bespoke and custom software and its third-party components).
Goal: When someone (or an AI tool) tries to install "reqeusts" instead of "requests", the installation fails with a clear warning listing the similar package name.
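The distance check can be sketched in a few lines. This is a minimal pre-install hook, not a production scanner: the `POPULAR` set is illustrative, and a real deployment would pull the top-N package names from your registry and tune the distance threshold per name length.

```python
# Illustrative allowlist; a real hook would load the registry's most-downloaded names.
POPULAR = {"requests", "express", "lodash", "react"}

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def check_package(name: str):
    """Return the popular package `name` resembles, or None if it looks safe."""
    for known in POPULAR:
        if name != known and levenshtein(name, known) <= 2:
            return known
    return None
```

Wired into a pre-install hook, a non-None result blocks the install and prints the similar name for the reviewer.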
2. Restrict AI Tool Package Installation Permissions
Your AI coding assistants should not have direct write access to package manifests or the ability to execute installation commands. They can suggest; humans must approve and execute.
Implementation: Configure tools like GitHub Copilot, Claude Code, or Cursor to operate in suggestion-only mode. Create a review workflow where AI-suggested dependencies require a second human approval before installation.
Goal: Your CI logs show zero packages installed directly by AI tool automation. Every installation traces to a human commit with a reviewed diff.
3. Pin All Transitive Dependencies with Cryptographic Hashes
Lock files (package-lock.json, poetry.lock, go.sum) must include integrity hashes for every dependency, including nested ones. Reject any installation that doesn't match the recorded hash.
Why this matters: The SANDWORM_MODE campaign used weaponized GitHub Actions to modify repositories. Hash pinning prevents malicious updates from silently replacing legitimate packages.
Goal: Your package-lock.json contains "integrity": "sha512-..." for every single entry, and npm ci fails if any hash mismatches.
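A CI gate for this goal is straightforward to script. The sketch below assumes npm's lockfileVersion 2/3 layout, where dependencies live under a top-level `packages` map; the root project entry (key `""`) and workspace symlinks legitimately carry no integrity hash, so they are skipped.

```python
import json

def missing_integrity(lockfile_text: str) -> list:
    """Return the lockfile entries that lack an 'integrity' hash."""
    lock = json.loads(lockfile_text)
    missing = []
    for path, entry in lock.get("packages", {}).items():
        if path == "":           # the root project itself has no integrity hash
            continue
        if entry.get("link"):    # workspace symlinks carry no integrity either
            continue
        if "integrity" not in entry:
            missing.append(path)
    return missing
```

Fail the build if the returned list is non-empty, before `npm ci` ever runs.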
4. Implement Scoped, Short-Lived Tokens in CI Pipelines
Following npm's move to granular access tokens with enforced expiration, configure CI pipelines to use tokens that:
- Expire within 24 hours
- Are scoped to specific packages or registries
- Cannot be reused across different pipeline jobs
Specific requirement: Aligns with NIST SP 800-53 Rev. 5 control IA-5(1) on managing password-based authenticators by replacing long-lived static credentials.
Goal: Your GitHub Actions workflow generates a fresh npm token at job start, uses it only for that build, and the token expires as soon as the job completes.
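The properties above (expiry plus scope, verified on every use) can be illustrated with a conceptual sketch. This is not npm's actual token format: it's a minimal HMAC-signed claims blob showing why a consumer rejects a token that is expired or presented outside its scope.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; a real pipeline fetches this from a vault/KMS

def mint_token(scope: str, ttl_seconds: int = 3600) -> str:
    """Mint a scoped token that self-expires (conceptual sketch, not npm's format)."""
    claims = json.dumps({"scope": scope, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, claims, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(claims).decode() + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """Accept only an untampered token with the right scope that hasn't expired."""
    payload_b64, sig = token.rsplit(".", 1)
    claims_raw = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, claims_raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(claims_raw)
    return claims["scope"] == required_scope and claims["exp"] > time.time()
```

In practice you would lean on the registry's own granular tokens rather than rolling your own, but the verification logic (signature, then scope, then expiry) is the same shape.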
5. Enable Mandatory 2FA for All Package Publishing Accounts
Every account with publish rights to your internal or public packages must use hardware tokens or authenticator apps. SMS-based 2FA does not meet this requirement.
Note: npm has implemented mandatory 2FA for high-impact packages. Extend this to all your publishing accounts.
Goal: You cannot publish a package without completing a TOTP challenge or hardware key verification. No exceptions for "emergency" publishes.
6. Monitor for Credential Exfiltration Patterns
Deploy runtime monitoring that detects when a development process:
- Accesses SSH keys, AWS credentials, or .npmrc tokens
- Makes outbound network connections to non-allowlisted domains
- Writes to home-directory paths outside the project workspace
Why this matters: SANDWORM_MODE included a "dead switch" to wipe home directories—your monitoring should catch suspicious file system access before damage occurs.
Goal: Your EDR or container runtime security alerts when a Node.js process reads ~/.ssh/id_rsa and immediately attempts an HTTPS POST to an unknown domain.
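The detection logic behind that alert is a simple correlation: a process reads a credential path, then opens an outbound connection to a domain off the allowlist. A minimal sketch, assuming a normalized audit-event schema (`pid`, `type`, `target`) that your EDR or eBPF tooling would actually supply:

```python
SENSITIVE_PATHS = ("/.ssh/", "/.aws/", ".npmrc")             # illustrative patterns
ALLOWED_DOMAINS = {"registry.npmjs.org", "api.github.com"}   # illustrative allowlist

def flag_exfiltration(events: list) -> list:
    """Flag (pid, domain) pairs where a process read a sensitive file and then
    connected to a non-allowlisted domain. Each event is a dict:
    {"pid": int, "type": "file_read" | "net_connect", "target": str}."""
    read_credentials = set()  # pids that touched credential material
    alerts = []
    for ev in events:
        if ev["type"] == "file_read" and any(p in ev["target"] for p in SENSITIVE_PATHS):
            read_credentials.add(ev["pid"])
        elif (ev["type"] == "net_connect"
              and ev["pid"] in read_credentials
              and ev["target"] not in ALLOWED_DOMAINS):
            alerts.append((ev["pid"], ev["target"]))
    return alerts
```

Real tooling correlates across child processes and time windows, but the ordering constraint (credential read before unknown egress) is what separates this signal from routine file access.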
7. Isolate AI Tool Execution Environments
Run AI coding assistants in sandboxed containers or VMs with:
- No access to production credentials
- Network egress limited to approved package registries and AI service endpoints
- File system access restricted to the current project directory
Goal: Your AI tool can read your code and suggest changes, but cannot access your AWS credentials file or SSH to your production database.
8. Audit Dependency Changes in Pull Requests
Configure automated checks that flag PRs containing:
- New dependencies added to package manifests
- Changes to lock file hashes
- Modifications to CI pipeline configuration
Specific requirement: Supports SOC 2 Type II CC6.1 (logical access controls) and CC7.2 (change management).
Goal: Every PR that touches package.json triggers a required review from your security team, with a diff showing exactly what packages were added and why.
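An automated check like this boils down to scanning the PR's unified diff for added lines in watched files. A minimal sketch, where the `WATCHED_FILES` patterns are assumptions you'd tailor to your repo layout:

```python
WATCHED_FILES = ("package.json", "package-lock.json", ".github/workflows/")

def flag_dependency_changes(diff_text: str) -> list:
    """Scan a unified diff and report added lines in files that need
    security review (manifests, lock files, CI workflow configs)."""
    findings = []
    current_file = None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[6:]              # file the following hunks modify
        elif line.startswith("+") and not line.startswith("+++"):
            if current_file and any(w in current_file for w in WATCHED_FILES):
                findings.append(f"{current_file}: {line[1:].strip()}")
    return findings
```

Wire the output into a required status check so a non-empty report requests review from the security team.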
Common Mistakes
Trusting AI Suggestions Without Verification - AI tools don't understand supply chain security. They optimize for "does this code work" not "is this package malicious." Treat every AI-suggested dependency as untrusted input.
Assuming Typosquatting Only Affects Beginners - The SANDWORM_MODE campaign specifically targeted AI tools because they process package names at machine speed. Your senior developers aren't the vulnerability—their tooling is.
Skipping Transitive Dependency Review - Attackers know you review direct dependencies. They hide malicious code three levels deep in the dependency tree. Your hash pinning must cover everything.
Using the Same Credentials Across Environments - If your CI pipeline uses the same npm token as your local development machine, a compromised laptop gives attackers pipeline access. Scope and rotate aggressively.
Next Steps
This week: Audit which AI tools have package installation permissions. Revoke direct execution rights.
This month: Implement hash pinning for all lock files and configure CI to reject installations without matching hashes.
This quarter: Deploy runtime monitoring for credential access patterns and establish a dependency review workflow that includes AI-suggested packages.
The attack surface just expanded. Your AI tools are writing code faster than you can review it—and attackers know it. These controls ensure that speed doesn't come at the cost of supply chain integrity.