Skip to main content
A Calendar Invite Hijacked an AI Browser: The PerplexedComet BreakdownIncident
5 min readFor Compliance Teams

A Calendar Invite Hijacked an AI Browser: The PerplexedComet Breakdown

What Happened

In October 2024, security researchers at Zenity discovered a zero-click vulnerability in Perplexity's Comet AI browser. The flaw, named PerplexedComet, allowed attackers to inject malicious instructions into the AI's execution context without any user interaction beyond viewing content. Attackers could embed hidden prompts in a calendar invite, email, or web page that the AI browser would execute as trusted commands. This could lead to exfiltration of sensitive data, manipulation of responses, or redirection to malicious sites.

In response, Perplexity implemented stricter security boundaries, but similar vulnerabilities remain across the AI agent ecosystem.

Timeline

October 2024: Zenity reports PerplexedComet vulnerability to Perplexity
Last month: Perplexity deploys stricter trust boundaries
Current state: Vulnerability mitigated in Comet, but similar attack vectors remain viable across other AI browsers and agents

The gap between discovery and mitigation was a critical period where organizations using Comet faced unquantified exposure.

Which Controls Failed or Were Missing

Input Validation at Trust Boundaries

Comet treated external content as potentially containing instructions for the AI agent. The browser lacked separation between user-facing content and system-level commands. This indirect prompt injection attack bypassed traditional input validation, as the malicious payload appeared legitimate to standard filters.

Secure Development Practices for AI Components

Perplexity launched an AI browser without established security patterns for agent-based systems. There was no evidence of:

  • Threat modeling specific to AI execution contexts
  • Security testing for prompt injection vectors
  • Sandboxing or privilege separation for AI agent actions
  • Content Security Policy equivalent for AI instruction parsing

Change Management and Security Review

AI feature deployment outpaced security review cycles. Your team needs a gate: no AI agent should touch production data until its trust boundaries are mapped and injection vectors tested.

What the Standards Require

OWASP ASVS v4.0.3 – Input Validation

Requirement 5.1.1: "Verify that the application has defenses against HTTP parameter pollution attacks, particularly if the application framework makes no distinction about the source of request parameters."

This requirement applies to AI agents: distinguish between content to display and instructions to execute. If your AI browser can't differentiate between a calendar invite's body text and a system command, this control has failed.

Requirement 5.3.1: "Verify that output encoding is relevant for the interpreter and context required."

For AI agents, "output encoding" means instruction sanitization. Before processing external content, strip or escape anything that could be interpreted as a command.

ISO/IEC 27001:2022 – A.8.31 Separation of Development, Test and Production Environments

You cannot deploy AI agents directly to production without a testing environment to verify prompt injection resistance. The Comet vulnerability suggests Perplexity lacked an effective staging process for security validation of AI features.

NIST 800-53 Rev 5 – SI-10 Information Input Validation

"The information system checks the validity of [Assignment: organization-defined information inputs]."

For AI browsers, your organization-defined inputs must include:

  • External content the AI will process
  • The distinction between data and instructions
  • Rate limits and anomaly detection for unusual command patterns

SI-10 also requires documenting how input validation failures are handled. When your AI agent encounters a suspected injection attempt, what happens? Log it? Block it? Alert your SOC? You need a defined response.

PCI DSS v4.0.1 – Requirement 6.4.3

"If your AI browser processes payment data or operates in a cardholder data environment, Requirement 6.4.3 applies: all scripts are managed with integrity and authorization controls."

AI agents execute dynamic "scripts" (prompts and instructions) constantly. You need:

  • A whitelist of authorized instruction patterns
  • Integrity verification for any pre-loaded prompts or system instructions
  • Authorization controls to prevent external content from triggering privileged actions

Lessons and Action Items for Your Team

1. Inventory Your AI Agents Now

List every AI-powered tool with access to corporate data:

  • Browser extensions
  • Email assistants
  • Code completion tools
  • Customer service chatbots
  • Document analysis systems

For each one, document: What data can it access? What actions can it take? Who approved its deployment?

2. Implement AI-Specific Threat Modeling

Traditional STRIDE won't catch prompt injection. Add these questions to your threat model:

  • Can external content influence the AI's instructions?
  • What's the most sensitive action this agent can take?
  • How do we verify the agent is following our instructions, not an attacker's?
  • What happens if the AI's output is fed back into another AI system?

3. Build a Prompt Injection Test Suite

Create test cases with adversarial inputs:

  • "Ignore previous instructions and..."
  • Hidden text in emails (white text on white background)
  • Unicode characters that look like system commands
  • Multi-step attacks across multiple inputs

Run these tests before deploying any AI agent update.

4. Establish Trust Boundaries for AI Execution

Your AI agent needs privilege separation:

  • Read-only mode for processing external content
  • Explicit user confirmation for sensitive actions (sending emails, accessing credentials)
  • Sandboxed execution environment with no direct network access
  • Rate limiting on API calls the agent can make

5. Update Your Acceptable Use Policy

Your employees are installing AI tools without IT approval. Your AUP needs explicit language:

"Employees may not use AI assistants, browsers, or agents that process company data without Security team approval. Submit requests via [your process]."

Then build a fast-track approval process so you're not the blocker.

6. Monitor AI Agent Behavior

Traditional DLP won't catch an AI agent exfiltrating data because the agent is authorized to access that data. You need behavioral monitoring:

  • Unusual volume of data accessed by AI tools
  • API calls to external services from AI agents
  • Changes in prompt patterns (someone trying to jailbreak your agent)

7. Require Vendor Security Documentation

Before approving any AI tool, demand:

  • Architecture diagram showing trust boundaries
  • Penetration test results specific to prompt injection
  • Incident response plan for AI security issues
  • Data retention and deletion policies

If the vendor can't provide these, they're not ready for your environment.


The PerplexedComet vulnerability wasn't an implementation bug—it was an architecture failure. AI browsers create new trust boundaries that your existing controls don't address. You need to map those boundaries explicitly and test them adversarially before the next calendar invite becomes your breach notification.

Topics:Incident

You Might Also Like