On a Tuesday morning, developers running npm update unknowingly downloaded [email protected], an unauthorized release published without human approval. The package remained live for eight hours, during which an AI issue triage bot became an unwitting link in a supply chain attack on a tool with over 5 million users.
This incident wasn't a zero-day exploit. Security researcher Adnan Khan, who identified the attack and named it "Clinejection," showed that it abused known weaknesses in how GitHub Actions workflows handle external input. What made it novel was the target: an AI agent embedded in a CI/CD pipeline.
What Happened
An attacker submitted a malicious issue to the Cline repository. The issue contained crafted content designed to exploit the repository's AI-powered triage bot. When the bot processed the issue, it triggered a GitHub Actions workflow that published an unauthorized version of the Cline CLI package to npm.
The attack succeeded because:
- The AI bot had write access to workflow files.
- The GitHub Actions workflow accepted untrusted input from issue content.
- The workflow had npm publishing credentials.
- No human approval gate existed between the bot's actions and package publication.
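Taken together, those four failures form a single chain. The following is a hypothetical reconstruction of that chain as one workflow, not Cline's actual configuration; the file name, bot command, and step layout are all illustrative:

```yaml
# Hypothetical sketch of the vulnerable pattern -- NOT the actual Cline workflow.
name: ai-triage
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # DANGER: attacker-controlled issue text is expanded into the shell
      # command before the shell runs, so crafted issue content can inject
      # arbitrary commands into this step.
      - name: Process issue with AI bot
        run: ./triage-bot --analyze "${{ github.event.issue.body }}"

      # DANGER: the same job holds a publish-capable token, so a single
      # injection reaches the npm registry with no human approval gate.
      - name: Publish
        run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

The key property of this anti-pattern is that one job both consumes untrusted input and holds release credentials, so compromising the first step is enough to complete the attack.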
The unauthorized package was available on npm for about eight hours before detection and removal.
Timeline
- Initial compromise: Attacker submits a crafted issue to the Cline repository.
- Bot processing: AI triage bot processes the malicious issue content.
- Workflow execution: GitHub Actions workflow runs with tainted input.
- Package publication: Unauthorized [email protected] published to npm registry.
- Detection and removal: Roughly eight hours later, the unauthorized package is detected and removed from npm.
- Post-incident: Snyk and Cline announce a security partnership to address AI-specific vulnerabilities.
Which Controls Failed
Input validation on workflow triggers. The GitHub Actions workflow accepted issue content as trusted input without sanitization, violating the principle that all external input is untrusted until validated.
Least privilege for bot accounts. The AI bot had permissions to modify workflows and trigger publishing actions, which it didn't need.
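In GitHub Actions, the workflow-level permissions block is one place to enforce this for the built-in token. A minimal sketch for a triage-only bot (scopes chosen as an assumption about what such a bot needs):

```yaml
# Sketch: restrict the GITHUB_TOKEN to what a triage bot actually needs.
permissions:
  issues: write      # label and comment on issues
  contents: read     # read the repository, never push to it
  # Any scope not listed here defaults to 'none', so this token cannot
  # modify workflow files, create releases, or touch packages.
```

Registry credentials like an npm token are separate secrets, so they need the same treatment independently: simply do not mount them into jobs that process external input.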
Separation of duties. There was no approval mechanism separating automated actions from package publication, allowing a single compromised component to complete the attack chain.
Monitoring and alerting. Eight hours passed before the unauthorized package was detected. No automated checks flagged the unexpected publish event or version number.
What the Standards Require
PCI DSS v4.0.1 Requirement 6.4.3 mandates that scripts are authorized, inventoried, and integrity-checked. Apply the same discipline to pipeline automation: an AI bot processing external issues doesn't need npm publishing rights. If your CI/CD pipeline handles payment data or operates in a PCI environment, this separation is mandatory.
OWASP ASVS v4.0.3 requirement 5.1.3 requires that all input is validated using positive validation (allowlists). Issue content is untrusted input. The workflow should have validated and sanitized this input before processing, especially before executing commands or modifying files.
ISO/IEC 27001:2022 Control 8.2 addresses privileged access rights. The AI bot should operate under a service account with permissions limited to its actual function: reading issues, adding labels, posting comments. Publishing packages falls outside this scope.
NIST SP 800-53 Rev 5 controls AC-5 and AC-6 require separation of duties and least privilege. Your automation accounts need explicit permission boundaries. If a bot can both modify code and publish releases, you've created a single point of compromise.
Lessons and Action Items
Audit your bot permissions today. List every automated account in your repositories. Document what each bot can do versus what it needs to do. If you're running AI agents for issue triage, code review, or dependency updates, they probably have too much access.
Treat AI agents as untrusted executors. Your AI bot processes external input — issue text, pull request descriptions, commit messages. Apply the same input validation you'd use for a web form. Sanitize before execution. Use allowlists, not denylists.
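The allowlist idea can be sketched as a workflow step. Here the bot's suggested label is treated as untrusted output and checked against an explicit list before the workflow acts on it; the bot step id and its label output are hypothetical:

```yaml
# Sketch: treat the bot's suggestion as untrusted data and check it
# against an explicit allowlist before acting on it.
- name: Apply label only if allowlisted
  env:
    LABEL: ${{ steps.bot.outputs.label }}    # hypothetical bot step output
    ISSUE: ${{ github.event.issue.number }}
    GH_TOKEN: ${{ github.token }}
  run: |
    case "$LABEL" in
      bug|enhancement|question|documentation)
        gh issue edit "$ISSUE" --add-label "$LABEL" ;;
      *)
        echo "Rejected unexpected label: $LABEL" >&2
        exit 1 ;;
    esac
```

Note that the check enumerates what is allowed rather than trying to pattern-match what is dangerous; anything outside the list fails closed.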
Separate automation from publication. Create a hard break between automated workflows and package publishing. Require human approval for releases, even if the build and test pipeline is fully automated. GitHub's environment protection rules can enforce this at the workflow level.
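A sketch of what that hard break looks like with environment protection rules. The environment name is illustrative, and the required-reviewers rule itself is configured in repository settings rather than in YAML; the workflow only declares which environment the job targets:

```yaml
# Sketch: the publish job targets a protected environment. If the
# 'npm-release' environment is configured with required reviewers in
# repository settings, this job pauses until a human approves it.
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: npm-release
    steps:
      - uses: actions/checkout@v4
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Scoping the NPM_TOKEN secret to that environment, rather than to the whole repository, means no other job can reach it even if compromised.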
Implement workflow input validation. If your GitHub Actions workflows use github.event.issue.body or similar fields, you're accepting untrusted input. Use the inputs context with explicit validation instead. Better: don't let external content flow into commands at all.
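One safe pattern is to pass event fields through environment variables instead of interpolating them into the script text. The difference, shown as a sketch with a hypothetical triage-bot command:

```yaml
# Unsafe: the expression is expanded into the script text before the
# shell runs, so issue content becomes part of the command itself.
#   run: ./triage-bot --analyze "${{ github.event.issue.body }}"

# Safer: the value arrives in an environment variable; the shell
# treats it as data, not code.
- name: Process issue safely
  env:
    ISSUE_BODY: ${{ github.event.issue.body }}
  run: ./triage-bot --analyze "$ISSUE_BODY"
```

The substitution still happens, but it happens in the environment rather than in the command line, so shell metacharacters in the issue body have no effect.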
Monitor your package registries. Set up alerts for unexpected publishes. If you publish to npm weekly and suddenly see a release on Tuesday at 3 AM, that's a signal. Track version numbers, publish timestamps, and the accounts performing publishes.
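A minimal version of this check can run as a scheduled job. This sketch compares the latest version on npm against the version the repository expects; the package name and cron schedule are placeholders, and a real setup would route the failure into your alerting system rather than just failing the run:

```yaml
# Sketch: alert if npm shows a version the repository doesn't know about.
name: registry-watch
on:
  schedule:
    - cron: '*/30 * * * *'   # every 30 minutes; tune to your release cadence

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Compare published version to package.json
        run: |
          published=$(npm view your-package version)   # placeholder name
          expected=$(node -p "require('./package.json').version")
          if [ "$published" != "$expected" ]; then
            echo "Unexpected version on npm: $published (expected $expected)" >&2
            exit 1   # naive check; wire this into real alerting
          fi
```

Even this naive comparison would have flagged the unauthorized Cline release far sooner than eight hours.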
Scope your publishing tokens. npm, PyPI, and other registries support scoped tokens with limited permissions. Your CI/CD pipeline doesn't need a token that can publish every package in your organization. Create per-package tokens and rotate them.
Review your Actions workflows for injection risks. Search your .github/workflows directory for ${{ github.event patterns. Each one is a potential injection point. The GitHub Security Lab maintains a list of dangerous workflow patterns — use it.
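That search can itself be automated as a CI step. A sketch that fails the build when a workflow interpolates attacker-controllable event fields directly; the pattern list is a starting point, not exhaustive:

```yaml
# Sketch: flag direct interpolation of attacker-controllable event fields.
# This will also match safe env-var assignments, which are still worth
# a manual review, so treat hits as findings rather than proven bugs.
- name: Scan workflows for injection-prone expressions
  run: |
    if grep -rnE '\$\{\{\s*github\.event\.(issue|comment|pull_request)' .github/workflows/; then
      echo "Found untrusted event data interpolated into workflows" >&2
      exit 1
    fi
```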
Test your detection. Can you identify an unauthorized package publish within minutes, not hours? Run a drill: publish a test package from an unexpected account or at an unusual time. Measure how long until someone notices.
The Clinejection incident shows that AI agents aren't just productivity tools — they're new components in your threat model. When you add an AI bot to your repository, you're adding an executor that processes untrusted input and takes actions on your behalf. Secure it accordingly.