What Happened
In early 2025, CISA and four international cybersecurity agencies issued a joint advisory warning about a specific failure mode: agentic AI systems operating with excessive permissions and insufficient oversight. The Australian Cyber Security Centre, Canadian Centre for Cyber Security, New Zealand's National Cyber Security Centre, and the UK's National Cyber Security Centre joined CISA to highlight this issue.
The advisory describes a pattern seen across deployments: AI agents granted broad access to systems, networks, and data without the granular controls applied to other privileged accounts. This failure is architectural, not hypothetical.
Deployment Pattern
Week 1-2: Your team deploys an AI agent to automate incident response ticket triage. The agent needs read access to your ticketing system and limited write access to update ticket status.
Week 3-8: The agent proves useful. It is granted access to your SIEM for additional context, then to your asset inventory, and finally to your vulnerability scanner to cross-reference findings.
Week 12: The agent now has read access to most of your security stack. A developer requests it automatically create Jira tickets for critical findings, granting it API access to Jira with a service account.
Week 16: That service account has the same permissions as the creator, including admin rights to several boards and the ability to transition tickets through any workflow state.
Week 20: An attacker discovers a prompt injection vector in the agent's input processing. They craft a ticket description that causes the agent to exfiltrate data through its Jira API access, disguised as legitimate ticket updates.
Your incident response plan did not account for "AI agent compromised by prompt injection."
Which Controls Failed or Were Missing
Least Privilege Violation: The agent operated with cumulative permissions granted over time, not permissions mapped to specific functions. When you granted Jira access, you used an existing service account instead of creating a purpose-limited credential.
Missing Isolation: The agent had direct access to multiple systems. A compromised agent could pivot between your ticketing system, SIEM, asset inventory, and vulnerability scanner without additional authentication.
No Continuous Monitoring: You monitored the agent's performance metrics but not its access patterns. There were no alerts for unusual API calls, access to sensitive data stores, or lateral movement between systems.
Absent Human-in-the-Loop Controls: The agent could autonomously create, modify, and close tickets. No approval gate existed for actions that modified data or triggered workflows in connected systems.
Inadequate Input Validation: The agent processed natural language input from tickets without sanitization or validation. Prompt injection wasn't considered a threat because the agent was viewed as an internal tool, not an attack surface.
What the Relevant Standards Require
The CISA advisory emphasizes least privilege, isolation, and continuous monitoring with human oversight. Map these to your existing compliance obligations:
NIST 800-53 Rev 5, AC-6 (Least Privilege): "Employ the principle of least privilege, allowing only authorized accesses for users (or processes acting on behalf of users) that are necessary to accomplish assigned organizational tasks." Your AI agent is a process acting on behalf of users. It needs function-specific permissions, not inherited admin rights.
ISO/IEC 27001:2022, Control 8.2 (Privileged Access Rights): Requires allocation and use of privileged access rights to be restricted and controlled. Service accounts for AI agents are privileged accounts. They need the same controls: regular review, MFA where possible, logging of all actions.
NIST CSF v2.0, PR.AC-4: "Access permissions and authorizations are managed, incorporating the principles of least privilege and separation of duties." If your agent can both read sensitive data and write to external systems, you've violated separation of duties.
SOC 2 Type II, CC6.1: Requires logical access controls including restriction of access rights to authorized users. Your AI agent is a user. Its access rights need documentation, approval, and periodic review just like human accounts.
The advisory's emphasis on human-in-the-loop control maps to several standards' requirements for approval workflows on sensitive actions. PCI DSS Requirement 7.2.2 requires that access to privileged functions be assigned based on job responsibilities—and for AI agents, that means defining which functions genuinely need automation versus which need human approval.
Lessons and Action Items for Your Team
Audit existing AI agent permissions today. List every system each agent can access and every action it can perform. Compare that list to the agent's documented purpose. Remove everything that isn't strictly necessary for its core function.
Create purpose-limited service accounts. Stop using developer or admin accounts for AI agent authentication. Create a new service account for each agent with only the permissions required for its specific tasks. If your agent needs to read from your SIEM and write to your ticketing system, those should be two separate credentials with two separate permission sets.
Implement input validation and sanitization. Treat every input to your AI agent as untrusted, even if it comes from internal systems. Parse and validate structured data. Sanitize natural language input before processing. Build a whitelist of allowed actions and reject anything that falls outside it.
Add monitoring for agent behavior, not just performance. Track which systems your agent accesses, when, and what data it touches. Alert on access patterns that deviate from baseline: accessing new systems, unusual API call volumes, or data exfiltration indicators like large outbound transfers.
Define human approval gates. Identify actions that should never be fully automated: privilege escalation, access to customer data, modifications to security controls, external communications. Require human approval before your agent can execute these actions.
Document your agent's capabilities and limitations. Understand what your agent can actually do, not just what you intended it to do. Document its access, its decision logic, and its failure modes. Include this documentation in your incident response runbooks.
Test for prompt injection. Add adversarial testing to your AI agent security program. Can an attacker manipulate the agent through crafted input? Can they cause it to access unauthorized systems or exfiltrate data? Test before an attacker does.
The CISA advisory isn't predicting the future—it's describing the present. AI agents are already deployed with excessive permissions and insufficient oversight. The question isn't whether this will cause incidents. It's whether you'll fix the architecture before or after your first breach.



