What Happened
In September 2024, North Korea's Famous Chollima APT group launched PromptMink, a supply-chain campaign targeting AI coding agents. The attackers published two malicious npm packages, @hash-validator/v2 and @solana-launchpad/sdk, which combined legitimate functionality with malware. These packages were not aimed at human developers but at AI agents that automatically select and integrate code.
By March 2025, the attackers shifted from script-based exploits to pre-compiled malicious Node.js add-ons written in Rust, making detection harder and execution faster.
The attack method was clear: attackers optimized package metadata, documentation, and code comments to rank highly in AI agent recommendations. When developers using AI coding assistants requested specific functionality, the agents recommended these compromised packages. The packages delivered the promised functionality — along with malware.
Timeline
September 2024: PromptMink campaign begins with @hash-validator/v2 and @solana-launchpad/sdk published to npm registry.
September 2024 - February 2025: Initial phase uses script-embedded attacks within legitimate-looking packages.
March 2025: Attackers shift to pre-compiled Rust-based Node.js add-ons to evade detection.
Ongoing: Researchers observe continued evolution in package naming and metadata optimization tactics.
A related experiment highlighted the scale of the exposure: security researcher Charlie Eriksen registered react-codeshift, a package name generated by an LLM. Without any promotion, it spread to 237 GitHub repositories, showing how easily a malicious package optimized for AI agent selection could achieve widespread adoption.
Which Controls Failed or Were Missing
Dependency verification: Teams using AI coding agents lacked automated verification of package authenticity and publisher reputation.
AI agent constraints: Coding agents operated without restrictions on which registries, publishers, or package patterns they could recommend.
Package vetting: No pre-integration security scanning occurred between AI recommendation and code commit. Packages entered repositories without human review.
SBOM generation: Organizations weren't maintaining software bills of materials to reveal unexpected dependencies introduced through AI-assisted development.
Behavioral monitoring: Runtime monitoring didn't flag unusual network connections or file system access patterns introduced by pre-compiled add-ons.
The Rust-compiled add-ons bypassed controls designed for interpreted JavaScript. Security teams scanning for suspicious scripts found nothing — the malicious logic was already compiled into native binaries.
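One compensating control is to treat any dependency that ships native code as automatically review-worthy, since script scanners cannot see inside it. A minimal sketch, assuming a standard node_modules layout; the decision to flag every .node binary and binding.gyp file is this example's heuristic, not a detail from the campaign:

```ts
import { lstatSync, readdirSync } from "node:fs";
import { join } from "node:path";

// Recursively collect files in node_modules that indicate native code:
// pre-compiled .node add-ons and binding.gyp build scripts.
function findNativeArtifacts(dir: string, hits: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (lstatSync(path).isDirectory()) {
      findNativeArtifacts(path, hits);
    } else if (entry.endsWith(".node") || entry === "binding.gyp") {
      hits.push(path);
    }
  }
  return hits;
}

const artifacts = findNativeArtifacts("node_modules");
if (artifacts.length > 0) {
  console.warn("Native code found; route these packages to manual review:");
  for (const a of artifacts) console.warn(`  ${a}`);
  process.exit(1); // fail the pipeline until a human signs off
}
```

Legitimate packages ship native add-ons too, so treat this as a triage signal rather than a verdict: the point is that compiled code gets a different review path than interpreted code.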
What the Standards Require
PCI DSS v4.0.1 Requirement 6.3.2 mandates secure development of custom software based on industry standards. When AI agents are part of your development pipeline, you need controls around what the agent can introduce.
NIST 800-53 Rev 5 SA-15 requires security requirements for the development process. If AI coding agents are used, you need security requirements governing their package selection logic, accessible registries, and verification steps before integration.
ISO/IEC 27001:2022 Annex A.8.31 addresses environment separation but assumes human-controlled code promotion. With AI agents writing and committing code, technical controls must enforce the same separation principles: no direct production access, mandatory security gates, and audit trails for every dependency introduction.
OWASP ASVS v4.0.3 Section 14.2 requires verification that components come from trusted sources and don't contain known vulnerabilities. This verification must occur before integration — not after your AI agent has already committed the code.
Lessons and Action Items for Your Team
Implement AI agent allowlists. Configure your coding agents to recommend packages only from approved publishers or internal registries. Famous Chollima succeeded because agents had unrestricted registry access. Create a vetted package list for common use cases.
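One way to enforce the publisher side of that allowlist, sketched against the public npm registry's package metadata endpoint; the publisher names are hypothetical and the deny-by-default behavior is this example's choice:

```ts
// Reject a package unless every npm maintainer is on the approved list.
// Point the URL at your internal registry mirror if you run one.
const APPROVED_PUBLISHERS = new Set(["your-org-bot", "trusted-vendor"]); // hypothetical

async function isApproved(pkg: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(pkg)}`);
  if (!res.ok) return false; // unknown or unreachable: deny by default
  const doc = (await res.json()) as { maintainers?: { name: string }[] };
  const maintainers = doc.maintainers ?? [];
  return maintainers.length > 0 && maintainers.every((m) => APPROVED_PUBLISHERS.has(m.name));
}

isApproved(process.argv[2] ?? "").then((ok) => {
  console.log(ok ? "approved" : "blocked");
  process.exit(ok ? 0 : 1);
});
```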
Enforce security gates between AI recommendation and repository commit. Add a pre-commit hook, backed by the same check in your CI pipeline, that blocks any new dependency not already in your approved SBOM. When an AI agent suggests a new package, it should trigger a review workflow.
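A minimal sketch of that gate, runnable from .git/hooks/pre-commit or a hook manager such as husky; the approved-dependencies.json allowlist file is a hypothetical format, not a standard:

```ts
import { readFileSync } from "node:fs";

// Fail the commit if package.json declares a dependency that is not
// already on the approved list (hypothetical file: approved-dependencies.json).
const approved = new Set<string>(JSON.parse(readFileSync("approved-dependencies.json", "utf8")));
const pkg = JSON.parse(readFileSync("package.json", "utf8"));

const declared = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });
const unapproved = declared.filter((name) => !approved.has(name));

if (unapproved.length > 0) {
  console.error(`Unapproved dependencies: ${unapproved.join(", ")}`);
  console.error("Open a security review before committing these packages.");
  process.exit(1); // non-zero exit aborts the commit
}
```

The gate treats the agent's suggestion as untrusted input: nothing reaches the repository until a human adds it to the allowlist.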
Generate and monitor your SBOM in real time. Use tools like Syft or SPDX generators on every commit. Compare each new SBOM against the previous version and alert on any unauthorized dependency additions.
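A sketch of the comparison step, assuming CycloneDX JSON output such as `syft . -o cyclonedx-json` produces, with components identified by purl; the file names are placeholders:

```ts
import { readFileSync } from "node:fs";

type Sbom = { components?: { purl?: string }[] };

// Load the set of package URLs (purls) from a CycloneDX JSON SBOM.
function purls(path: string): Set<string> {
  const sbom: Sbom = JSON.parse(readFileSync(path, "utf8"));
  return new Set(
    (sbom.components ?? []).map((c) => c.purl).filter((p): p is string => !!p)
  );
}

const previous = purls("sbom-previous.json"); // from the last approved commit
const current = purls("sbom-current.json");   // generated on this commit

const added = [...current].filter((p) => !previous.has(p));
if (added.length > 0) {
  console.error("New dependencies since the last approved SBOM:");
  for (const p of added) console.error(`  ${p}`);
  process.exit(1); // alert and block until reviewed
}
```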
Scan compiled dependencies differently. Use behavioral analysis in sandboxed environments. Run new dependencies in isolated containers and monitor for unexpected network calls, file system access, or process spawning before allowing them into your build pipeline.
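As a rough sketch of that pattern: install the candidate package in a throwaway Docker container, then load it a second time with networking disabled, so any require-time attempt to phone home fails visibly. The image, volume name, and two-phase approach are assumptions of this example, and it only exercises load-time behavior, not every code path:

```ts
import { execFileSync } from "node:child_process";

// Phase 1: install the package in a disposable container (network required).
// Phase 2: require() it with --network=none; install scripts already ran,
// so any network access attempted at load time now errors out.
// pkg is assumed to be a plain package name, not arbitrary shell input.
function sandboxCheck(pkg: string): void {
  execFileSync("docker", [
    "run", "--rm", "-v", "sandbox-cache:/app", "-w", "/app",
    "node:20", "sh", "-c", `npm init -y && npm install ${pkg}`,
  ], { stdio: "inherit" });

  execFileSync("docker", [
    "run", "--rm", "--network=none", "-v", "sandbox-cache:/app", "-w", "/app",
    "node:20", "node", "-e", `require(${JSON.stringify(pkg)})`,
  ], { stdio: "inherit" });
}

sandboxCheck(process.argv[2] ?? ""); // e.g. `ts-node sandbox-check.ts some-package`
```

A fuller harness would also trace file system and process activity (for example with strace inside the container), but even this cheap check surfaces the crudest call-home behavior before the package touches your build.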
Restrict AI agent permissions at the infrastructure level. Apply least-privilege principles: agents can suggest, but only authorized humans or automated security gates can approve. Treat AI agents like junior developers — helpful, but requiring oversight.
Audit your existing repositories for AI-introduced dependencies. Run your SBOM generator against repositories where developers use AI coding assistants. Cross-reference dependencies against package registry creation dates and publisher histories.
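A sketch of the creation-date half of that cross-reference, using the `time.created` field the npm registry exposes for every package; the 180-day threshold is an arbitrary illustrative choice:

```ts
// Flag dependencies whose npm registry entry was created recently.
// A young package inside an AI-assisted repo deserves a closer look.
const MAX_AGE_DAYS = 180; // illustrative threshold, tune to your risk appetite

async function flagYoungPackages(deps: string[]): Promise<void> {
  for (const name of deps) {
    const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
    if (!res.ok) continue;
    const doc = (await res.json()) as { time?: { created?: string } };
    const created = doc.time?.created;
    if (!created) continue;
    const ageDays = (Date.now() - new Date(created).getTime()) / 86_400_000;
    if (ageDays < MAX_AGE_DAYS) {
      console.warn(`${name}: created ${created} (${Math.round(ageDays)} days ago)`);
    }
  }
}

flagYoungPackages(process.argv.slice(2)); // pass dependency names from your SBOM
```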
The PromptMink campaign isn't an anomaly; it's a preview. It is the first documented case of a well-resourced APT group deliberately targeting AI coding agents, and Famous Chollima demonstrated that optimizing for AI recommendation works. Your security controls need to account for this new attack surface, because your AI agents already have commit access.