
Cursor IDE Vulnerability: When AI Agents Execute Malicious Git Hooks

Overview of the Vulnerability

A critical vulnerability in Cursor IDE allowed attackers to execute arbitrary code through Git operations performed by the IDE's AI agent. The flaw, tracked as CVE-2026-26268 with a severity rating of 9.9 out of 10, was discovered by Novee Security researcher Assaf Levkovich. The attack abused built-in Git features—specifically hooks and bare repositories—that the AI agent would trigger during normal operations. Cursor addressed the issue in version 2.5.

The attack vector was straightforward. When Cursor's AI agent performed Git operations on a malicious repository, it would automatically execute code embedded in Git hooks without user interaction or warning.
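To make those mechanics concrete, here is a hedged sketch of how such a repository could be assembled; every directory name and the payload below are illustrative assumptions, not the advisory's actual proof of concept. The key detail is that normal clones never copy `.git/hooks`, but a bare-style repository committed as ordinary files carries its `hooks/` directory (and `config`) inside the clone, where Git will honor them:

```bash
# Illustrative reconstruction (names and payload are placeholders, not the
# published exploit). A bare-style repository committed as ordinary files
# ships its hooks/ and config with the clone.
mkdir -p trap/deps.git/hooks trap/deps.git/objects trap/deps.git/refs
echo 'ref: refs/heads/main' > trap/deps.git/HEAD

# A committed config can even point the embedded repo at a working tree.
cat > trap/deps.git/config <<'EOF'
[core]
    repositoryformatversion = 0
    bare = false
    worktree = ..
EOF

# Any applicable Git operation run inside trap/deps.git now honors this hook.
cat > trap/deps.git/hooks/post-checkout <<'EOF'
#!/bin/sh
echo "hook fired as $(whoami)" >> /tmp/poc.log   # benign stand-in payload
EOF
chmod +x trap/deps.git/hooks/post-checkout
```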

Timeline of Events

Discovery: Novee Security identified that Cursor's AI agent autonomously executes Git operations as part of its code analysis and assistance features.

Vulnerability Confirmation: Researchers demonstrated that malicious Git hooks in a cloned repository would execute when the AI agent performed routine Git operations.

Disclosure: The vulnerability was reported to Cursor's security team with proof-of-concept demonstrating remote code execution.

Patch Release: Cursor released version 2.5 addressing the vulnerability.

Public Disclosure: CVE-2026-26268 was published with a critical severity rating.

Failed or Missing Controls

Input Validation: The AI agent treated all Git repositories as trusted sources, failing to validate or sanitize repository contents before executing Git operations that could trigger hooks.

Privilege Separation: The AI agent operated with the same privileges as the user's IDE session. Hooks executed with full user permissions, lacking sandboxing or containment.

User Consent: The AI agent performed Git operations autonomously without explicit user approval, bypassing human-in-the-loop control that traditionally prevents automatic execution of untrusted code.

Code Execution Controls: Git hooks are executable scripts by design, but the IDE provided no mechanism to review, approve, or block hook execution before the AI agent triggered them.

Repository Trust Model: Cursor lacked a framework for distinguishing between repositories from verified sources and arbitrary external repositories that might contain malicious content.

Relevant Standards and Requirements

OWASP ASVS v4.0.3 Requirement 1.5.3 requires that input validation be enforced on a trusted service layer. The AI agent violated this by acting on repository contents without validation. Establish a trust boundary where external repository data is validated before any automated operations occur.
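What that boundary could look like in practice, as a rough pre-flight gate rather than Cursor's actual fix: scan a freshly fetched tree and refuse automated operations if it carries executable hook scripts or an embedded repository.

```bash
# Pre-flight gate sketch: block automation when a fetched tree contains
# executable hook scripts or what looks like an embedded bare repository.
repo="$1"
hooks=$(find "$repo" -type f -perm -u+x -path '*/hooks/*')
embedded=$(find "$repo" -type d -name '*.git' -not -path "$repo/.git")
if [ -n "$hooks" ] || [ -n "$embedded" ]; then
    printf 'Refusing automated operations on %s:\n%s\n%s\n' "$repo" "$hooks" "$embedded" >&2
    exit 1
fi
```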

NIST 800-53 Rev 5 Control SI-3 (Malicious Code Protection) requires mechanisms to detect and protect against malicious code execution. Git hooks are executable code. Your IDE should treat them as potentially malicious and require explicit user approval or run them in a restricted environment.

ISO 27001 Control 8.22 (Segregation of Networks) addresses logical separation of environments. AI agent operations should run in a segregated context with limited privileges, not in the full user session context.

PCI DSS v4.0.1 Requirement 6.4.3 states that custom scripts are reviewed and authorized before deployment to production. If your development environment processes cardholder data, AI agents performing automated operations must have equivalent controls.

SOC 2 Type II CC6.1 (Logical and Physical Access Controls) requires that access to data and systems is restricted to authorized users and programs. An AI agent that executes code from untrusted repositories without validation fails this control.

Action Items for Your Team

Audit AI-Assisted Tools: Document what operations AI tools like Cursor or GitHub Copilot perform autonomously. Map each operation to potential security impacts. Consider what happens if these tools process a malicious repository or suggestion.

Implement Repository Trust Levels: Create an explicit trust model:

  • Internal repositories: trusted by default
  • Public repositories from verified organizations: require review
  • Arbitrary external repositories: treat as untrusted, disable autonomous operations

Configure your IDE to enforce these trust levels before AI agents perform any operations.
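Plain Git can encode part of that model. Assuming untrusted clones live under a dedicated directory such as ~/src/untrusted/ (a convention of this sketch, not a Cursor feature), a conditional include strips hooks from everything checked out there:

```bash
# Repositories under ~/src/untrusted/ inherit a config that disables hooks.
cat > ~/.gitconfig-untrusted <<'EOF'
[core]
    hooksPath = /dev/null
EOF
git config --global 'includeIf.gitdir:~/src/untrusted/.path' ~/.gitconfig-untrusted
```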

Disable Git Hooks in AI Workflows: Set core.hooksPath to /dev/null in the Git configuration of projects where AI agents operate, or pass --no-verify on commands that support it; note that --no-verify skips only certain hooks (such as pre-commit and commit-msg), so the hooksPath setting is the more complete control. If hooks are necessary, run them manually after reviewing their contents.
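Both forms look like this; the per-invocation -c variant suits scripted agent calls:

```bash
# Persistent: Git finds no hooks to execute for this repository.
git config core.hooksPath /dev/null

# Per-invocation: the same effect for a single scripted command.
git -c core.hooksPath=/dev/null checkout main

# Partial: skips only this commit's pre-commit and commit-msg hooks.
git commit --no-verify -m "example"
```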

Require Explicit Consent for AI Operations: Configure AI tools to prompt for approval before executing commands that could have side effects, including Git operations, file system modifications, and network requests.

Sandbox AI Agent Execution: Run AI-assisted operations in containerized or virtualized environments. Tools like Dev Containers provide an isolation boundary, so a malicious hook is contained to the sandbox rather than reaching the host.
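As a rough illustration (assuming Docker is available; the image and mount paths are arbitrary), even a disposable container with networking disabled sharply limits what a triggered hook can reach:

```bash
# Throwaway sandbox: no network, disposable filesystem, workspace mounted in.
docker run --rm -it --network none \
  -v "$PWD":/workspace -w /workspace \
  ubuntu:24.04 bash
```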

Update and Verify: Update to Cursor version 2.5 or later. Verify the patch by creating a test repository with a malicious pre-commit hook and confirming the AI agent doesn't execute it.
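A harmless version of that test might look like the following, where the hook's only side effect is a marker file (repository name and marker path are arbitrary). Open the repository in Cursor, ask the agent to commit the staged change, and check whether the marker appears:

```bash
# Canary repository: the pre-commit hook only drops a marker file.
mkdir hook-canary && cd hook-canary && git init -q
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
touch /tmp/cursor-hook-fired   # if this file appears, hooks still execute
EOF
chmod +x .git/hooks/pre-commit
echo test > file.txt && git add file.txt
```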

Extend Your Threat Model: Update threat models to include "AI agent processes malicious input" as an attack vector. Consider scenarios where your AI assistant clones a repository with malicious code or executes suggested commands without validation.

Monitor AI Agent Activity: Enable logging for all operations your AI tools perform. Gain visibility into what the agent does, especially Git operations and command executions. Alert on unexpected patterns like hook execution or operations on external repositories.
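If a tool exposes no native audit log, a low-tech PATH shim gives you a baseline (this assumes the real binary is /usr/bin/git and that ~/bin precedes it on the agent's PATH):

```bash
# PATH shim: append every git invocation to an audit log, then delegate.
mkdir -p ~/bin
cat > ~/bin/git <<'EOF'
#!/bin/sh
printf '%s git %s\n' "$(date -u +%FT%TZ)" "$*" >> "$HOME/git-audit.log"
exec /usr/bin/git "$@"
EOF
chmod +x ~/bin/git
```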

The Cursor vulnerability highlights that AI-driven automation introduces new execution paths. Your security controls must account for non-human actors making autonomous decisions. Review your AI-assisted development tools this week—before someone else finds the next CVE-2026-26268 in your environment.
