Skip to main content
Claude Code Vulnerabilities: When Your AI Assistant Becomes an Attack VectorIncident
5 min readFor Security Engineers

Claude Code Vulnerabilities: When Your AI Assistant Becomes an Attack Vector

What Happened

Check Point researchers disclosed two vulnerabilities in Anthropic's Claude Code, an AI-powered coding assistant that integrates directly into development workflows. CVE-2025-59536 (CVSS 8.7) allows arbitrary shell command execution through malicious configuration files. CVE-2026-21852 (CVSS 5.3) enables exfiltration of Anthropic API keys from the same attack vector. Both vulnerabilities exploit how Claude Code processes configuration files from repositories, turning routine project setup into a potential compromise point.

The attack surface is deceptively simple: clone a malicious repository, and Claude Code processes its configuration files before you write a single line of code. The tool, designed to accelerate development by understanding project context, reads configuration files to adapt its behavior. Attackers weaponized this feature by embedding shell commands in those same files.

Timeline

Pre-disclosure period: Claude Code versions prior to 1.0.87, 1.0.111, and 2.0.65 were vulnerable. The exact discovery date isn't public, but Check Point coordinated disclosure with Anthropic.

Patch release: Anthropic shipped fixes in versions 1.0.87, 1.0.111, and 2.0.65. The staggered version numbers suggest multiple release branches required remediation.

Current state: Patched versions are available. Any team running earlier versions remains vulnerable to both remote code execution and API key theft.

Which Controls Failed or Were Missing

The fundamental failure: input validation on configuration files treated as trusted data.

Claude Code processed configuration files from arbitrary repositories without treating them as untrusted input. This violates the core principle that all external data—including files from version control—must be validated before processing. The tool executed commands embedded in configuration files with the privileges of the user running Claude Code, creating a direct path from repository clone to system compromise.

The API key exfiltration vulnerability reveals a second failure: insufficient isolation between the tool's execution context and sensitive credentials. Claude Code stored or accessed Anthropic API keys in a way that malicious configuration files could reach them. This suggests missing boundary enforcement between the AI assistant's runtime environment and credential storage.

Neither vulnerability required social engineering beyond convincing a developer to clone a repository—a routine action that happens dozens of times per day in active development teams. The attack chain is:

  1. Developer clones repository (malicious or compromised)
  2. Claude Code reads configuration files automatically
  3. Embedded commands execute with developer's privileges
  4. Attacker gains code execution or exfiltrates API keys

What the Relevant Standards Require

OWASP ASVS v4.0.3 Requirement 5.2.1: "Verify that all untrusted HTML input from WYSIWYG editors or similar is properly sanitized with an HTML sanitizer library or framework feature." While this requirement specifically addresses HTML, the principle extends to all untrusted input. Configuration files from external repositories are untrusted input and must be sanitized before processing.

OWASP ASVS v4.0.3 Requirement 12.3.1: "Verify that server-side template injection is prevented by validating or sanitizing user input." Configuration files that influence tool behavior function as templates. The standard requires validation before execution.

PCI DSS v4.0.1 Requirement 6.2.4: "Bespoke and custom software are developed securely." The requirement's intent extends to development tools themselves. If your development toolchain handles sensitive data (like API keys), it must be developed with the same security rigor as production applications.

NIST 800-53 Rev 5 Control SI-10: "Information Input Validation." The control requires organizations to check the validity of information inputs, including format, length, and content. Configuration files are information inputs that must be validated.

ISO 27001 Control 8.3: "Information security in development and support processes." Organizations must apply information security to development tools, not just production systems. An AI coding assistant is part of your development process and must meet the same security baseline.

The gap isn't that these standards failed to cover AI tools—it's that teams don't consistently apply input validation principles to development tooling. Configuration files feel safe because they're "just metadata," but they're code that influences system behavior.

Lessons and Action Items for Your Team

Immediate actions:

  1. Audit your Claude Code version. If you're running anything earlier than 1.0.87, 1.0.111, or 2.0.65, you're vulnerable. Update immediately. Don't wait for your next patch cycle.

  2. Review your API key storage. If Claude Code (or any AI assistant) has access to API keys, verify those credentials are stored in a dedicated secrets manager, not in environment variables or configuration files. Rotate any keys that might have been exposed.

  3. Implement repository vetting. Before cloning any repository—especially for evaluation or dependency review—run it through a sandbox environment without AI assistant access. This applies to open source dependencies, contractor code, and third-party integrations.

Architectural changes:

  1. Treat configuration files as untrusted input. Your code review process should flag any tool that processes external configuration files without validation. This includes build tools, linters, formatters, and AI assistants. If it reads a config file, it must validate that file.

  2. Enforce principle of least privilege for development tools. Your AI assistant doesn't need access to production credentials, SSH keys, or cloud provider tokens. Create isolated credential scopes for development tooling. If Claude Code is compromised, the blast radius should be limited to development resources.

  3. Map your AI tool attack surface. Document every AI-powered tool in your development pipeline: code completion, PR review bots, automated testing assistants. For each tool, identify what data it accesses and what commands it can execute. This becomes your risk register for AI tooling.

Process improvements:

  1. Update your threat model. Configuration files are now execution layers. Your threat model should explicitly address scenarios where attackers compromise repositories to deliver malicious configuration that targets development tools. This changes how you evaluate repository trust.

  2. Require security review for AI tool adoption. Before your team adopts any AI-powered development tool, security engineering must review its input validation, credential access, and execution model. The approval process for AI assistants should match the rigor you apply to production dependencies.

  3. Monitor for configuration file anomalies. Your EDR or file integrity monitoring should alert on unexpected configuration file modifications, especially in recently cloned repositories. A .claude.json file that appears immediately after git clone deserves scrutiny.

The core lesson: AI-powered development tools expand your attack surface by turning passive files into active execution contexts. Configuration files that once provided harmless metadata now influence what commands run on your developers' machines. Your security controls must evolve to match this new reality.

Topics:Incident

You Might Also Like