Your AI browser agent just accessed your local file system, read your credentials, and sent them to an attacker. You didn't click anything. You didn't approve any action. The agent did it autonomously—because that's what it was designed to do.
What Happened
Zenity Labs disclosed PleaseFix, a class of vulnerabilities affecting agentic browsers like Perplexity Comet. These flaws allow attackers to hijack AI agents, access local files, and steal credentials within authenticated user sessions—all without user interaction.
Unlike traditional browser exploits that require phishing clicks or social engineering, PleaseFix works because the AI agent operates autonomously. The vulnerability isn't in broken code. It's in the trust model itself.
Timeline
The disclosure followed responsible practices:
- Discovery: Zenity Labs identified the vulnerability class during research into agentic browser security models.
- Disclosure: Researchers notified Perplexity and other affected vendors privately.
- Remediation: Perplexity addressed the vulnerability in Comet before public disclosure.
- Publication: Zenity Labs published findings after vendors had time to patch.
According to Michael Bargury, CTO of Zenity, the vulnerability exploits the autonomous nature of AI agents themselves—not a traditional implementation flaw.
Which Controls Failed or Were Missing
Authentication and Authorization Boundaries
Traditional browsers maintain clear boundaries between user intent and system action. Agentic browsers blur this boundary by design. The agent acts on your behalf based on interpreted intent, not explicit approval for each action.
The PleaseFix vulnerabilities exploited this gap:
- Zero-click agent compromise granting file system access
- Data exfiltration through legitimate agent capabilities
- Credential theft within authenticated sessions
Input Validation at the Intent Layer
Your web application firewall validates HTTP requests. Your browser validates scripts. But who validates the natural language instructions your AI agent receives and interprets?
The first PleaseFix exploit succeeded because there was no effective validation layer between external input and agent action. The agent treated malicious instructions as legitimate tasks.
Least Privilege for Autonomous Actions
The agent had access to the local file system and authenticated sessions because it needed those capabilities to function. But there was no mechanism to restrict which files it could access or which credentials it could read based on the context of the request.
What the Standards Require
OWASP ASVS v4.0.3 Requirement 4.1.1: "Verify that the application enforces access control rules on a trusted service layer."
Agentic browsers need a new trust boundary. Your agent operates between the user and external services, but current frameworks don't account for autonomous intermediaries that make security-relevant decisions.
PCI DSS v4.0.1 Requirement 8.2.1: "Strong authentication for all users must be implemented."
When your agent accesses stored credentials on your behalf, is that "you" authenticating? The standard assumes human actors making explicit choices. Agents break this model.
ISO/IEC 27001 Control 5.15: "Processes and procedures should be established to manage information security risks associated with supplier relationships."
Your AI agent is effectively a supplier—a third party performing actions on your behalf. But unlike traditional suppliers, it operates inside your security perimeter with access to your authenticated sessions and local resources.
NIST CSF v2.0 Function: Protect (PR.AC-4): "Access permissions and authorizations are managed, incorporating the principles of least privilege and separation of duties."
The agent needs broad permissions to be useful. But those permissions weren't scoped to specific, validated tasks. Every capability the agent possessed became available to attackers who could manipulate its instructions.
Lessons and Action Items for Your Team
1. Inventory Your Agentic Systems Now
List every tool that takes autonomous action on behalf of users:
- AI coding assistants with repository access
- Browser agents that can read local files
- Automation tools that access authenticated APIs
- LLM-powered search tools with system permissions
For each one, document: What can it access? What actions can it take? Who validates its decisions?
2. Implement Intent Validation Layers
Build a validation step between agent interpretation and agent action. Before your agent reads a file or accesses a credential, require:
- Explicit scope matching (does this action match the user's stated goal?)
- Resource allowlisting (should this agent ever access this type of file?)
- Anomaly detection (is this request pattern unusual for this user?)
This won't stop all attacks, but it creates a checkpoint where traditional security controls can apply.
3. Scope Agent Permissions by Task Context
Don't give your agent standing access to everything it might need. Grant permissions dynamically based on the specific task:
- File system access only to explicitly shared directories
- Credential access only for services relevant to the current task
- API permissions scoped to the minimum required scope
Treat each agent invocation as a fresh session with limited privileges.
4. Log Agent Actions as Security Events
Your SIEM probably logs user logins and file access. Start logging agent actions with the same rigor:
- What instruction did the agent receive?
- What resources did it access?
- What data did it read or transmit?
- What was the source of the instruction?
Agent actions are privileged operations. Treat them accordingly in your monitoring.
5. Update Your Threat Model
Add these scenarios to your threat modeling exercises:
- Attacker-controlled content reaches your agent through legitimate channels (search results, web pages, API responses)
- Agent misinterprets malicious instructions as legitimate tasks
- Agent's broad permissions enable lateral movement after initial compromise
Your existing controls assume human decision-making at critical points. Agents remove those decision points.
6. Require Vendor Security Disclosures
Before deploying any agentic tool, ask vendors:
- What permissions does the agent require and why?
- How do you validate agent actions before execution?
- What's your process for security research and disclosure?
- Have you conducted third-party security assessments specifically for agentic capabilities?
Perplexity addressed the vulnerability before public disclosure—that's the vendor behavior you want to see.
The PleaseFix vulnerability class isn't going away. As agentic systems become more capable, they'll need more permissions and broader access. Your security model needs to evolve before your agents do.
AI security challenges



