Overview of the Vulnerability
Unit 42 researchers have identified a vulnerability in Google Cloud Vertex AI's permission model. This flaw allows attackers with compromised credentials to escalate privileges and exfiltrate data across an entire project. The root cause is that Vertex AI's default service agents are granted overly broad permissions, violating the principle of least privilege.
In their proof-of-concept, Unit 42 showed that stolen credentials could be used to gain unrestricted read access to all Google Cloud Storage buckets within a project. This attack doesn't require sophisticated techniques: just the ability to use Vertex AI's legitimate features with service accounts that have excessive access.
Understanding the Design Flaw
This issue is not tied to a specific incident timeline but represents a design flaw. The excessive permission scoping is a default setting in the platform's architecture. Unit 42's findings highlight that this configuration flaw affects any Google Cloud project using Vertex AI with default service agent permissions.
Failed Controls and Missing Safeguards
Excessive Default Permissions: Vertex AI service agents are automatically given broad access scopes that exceed their functional needs. When you create a Vertex AI resource, service accounts are generated with permissions that span multiple services and storage locations within your project.
Lack of Permission Boundaries: The default setup lacks resource-level access controls. A service agent needing access to one specific bucket instead receives access to all buckets in the project, fundamentally violating the least privilege principle.
Insufficient Credential Isolation: Service agents operate with permissions that cross security boundaries. If credentials for one AI workflow are compromised, an attacker can access data from unrelated workloads within the same project.
Monitoring Gaps for Privilege Escalation: Organizations using default configurations may lack visibility into how service agents use their permissions. While Google Cloud audit logs capture API calls, detecting abuse requires understanding what "normal" usage looks like for each service agent, which is challenging when permissions are overly broad.
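One way to close that gap is to maintain a per-agent baseline of expected API methods and flag anything outside it. The sketch below is illustrative only: the baseline dictionary and log entries are hypothetical, though the `principalEmail` and `methodName` fields mirror what Cloud Audit Logs records in its `protoPayload`.

```python
# Sketch: flag service-agent API calls that fall outside an expected baseline.
# The baseline and the log entries below are illustrative, not real data.

EXPECTED_METHODS = {
    # Hypothetical baseline: methods this service agent is expected to call.
    "service-123@gcp-sa-aiplatform.iam.gserviceaccount.com": {
        "google.cloud.aiplatform.v1.JobService.CreateCustomJob",
        "storage.objects.get",
    },
}

def flag_unexpected_calls(audit_entries, baseline=EXPECTED_METHODS):
    """Return (principal, method) pairs not covered by the baseline."""
    flagged = []
    for entry in audit_entries:
        principal = entry["principalEmail"]
        method = entry["methodName"]
        if method not in baseline.get(principal, set()):
            flagged.append((principal, method))
    return flagged

# Example: a bulk bucket listing by the service agent stands out.
entries = [
    {"principalEmail": "service-123@gcp-sa-aiplatform.iam.gserviceaccount.com",
     "methodName": "storage.objects.get"},
    {"principalEmail": "service-123@gcp-sa-aiplatform.iam.gserviceaccount.com",
     "methodName": "storage.buckets.list"},
]
print(flag_unexpected_calls(entries))
```

The harder part in practice is building the baseline; the narrower the service agent's permissions, the smaller and more meaningful that baseline becomes.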
Compliance Requirements
ISO/IEC 27001:2022 Annex A control 8.2 (Privileged access rights; A.9.2.3 in the 2013 edition) mandates restricting and controlling the allocation and use of privileged access rights. Service accounts, considered privileged identities, must have documented permissions and undergo regular reviews.
NIST SP 800-53 Rev. 5 AC-6 (Least Privilege) emphasizes granting only the minimum necessary privileges. The control enhancement AC-6(1) requires that access to security functions and security-relevant information be authorized only for explicitly approved personnel or processes.
PCI DSS v4.0.1 Requirement 7 requires that access to system components and data be granted based on job classification and function, adhering to the least privilege principle. Default service agent permissions can create compliance gaps if Vertex AI workloads touch cardholder data environments.
SOC 2 Type II CC6.3 requires restricting logical access through access control software and rule sets. For cloud service accounts, this means implementing custom IAM policies rather than relying on platform defaults.
Actionable Steps for Your Team
Implement Custom Service Accounts: Use Google Cloud's Bring Your Own Service Account (BYOSA) to replace default service agents with custom service accounts. This allows you to manage permissions through your IAM policies. For each Vertex AI resource, create a dedicated service account with permissions limited to the necessary resources.
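A custom service account only helps if its grants are actually narrow. One documented pattern is an IAM Condition that restricts a Cloud Storage role to a single bucket via `resource.name`. The sketch below builds such a binding as a dict in the IAM policy JSON shape; the service account and bucket names are placeholders.

```python
# Sketch: construct a least-privilege IAM binding for a custom Vertex AI
# service account, scoped to one bucket with an IAM Condition.
# The account and bucket names below are placeholders, not real resources.

def bucket_scoped_binding(service_account, bucket):
    """Grant objectViewer on a single bucket via a resource-name condition."""
    return {
        "role": "roles/storage.objectViewer",
        "members": [f"serviceAccount:{service_account}"],
        "condition": {
            "title": f"only-{bucket}",
            "expression": (
                'resource.name.startsWith('
                f'"projects/_/buckets/{bucket}")'
            ),
        },
    }

binding = bucket_scoped_binding(
    "vertex-train@example-project.iam.gserviceaccount.com",
    "training-data",
)
print(binding["condition"]["expression"])
```

Compare this with the default service agent, which receives project-wide access: the conditional binding makes the blast radius of a credential compromise one bucket instead of all of them.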
Audit Vertex AI Deployments: Use gcloud projects get-iam-policy to identify members matching the default Vertex AI service agent pattern (service-PROJECT_NUMBER@gcp-sa-aiplatform.iam.gserviceaccount.com). Review permissions for each account, documenting which Vertex AI resources use it, which data stores it needs access to, and whether current permissions exceed that scope.
Segment AI Workloads: Use separate Google Cloud projects to isolate AI workloads with different data sensitivity levels. This limits the impact if credentials are compromised, preventing attackers from accessing unrelated data stores.
Monitor Service Account Changes: Set up automated alerts for new service account creations or IAM policy changes. Real-time visibility into permission changes is crucial for maintaining security.
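Whatever alerting pipeline you use, the core check is a diff between IAM policy snapshots. The sketch below compares two snapshots in the get-iam-policy JSON shape and reports newly granted (member, role) pairs; the snapshots are illustrative.

```python
# Sketch: compare two IAM policy snapshots and report newly granted
# (member, role) pairs -- the kind of change an alert should surface.
# The before/after snapshots below are illustrative examples.

def new_grants(before, after):
    """Return (member, role) pairs present in `after` but not in `before`."""
    def pairs(policy):
        return {
            (member, binding["role"])
            for binding in policy.get("bindings", [])
            for member in binding.get("members", [])
        }
    return sorted(pairs(after) - pairs(before))

before = {"bindings": [{"role": "roles/viewer",
                        "members": ["user:dev@example.com"]}]}
after = {"bindings": [
    {"role": "roles/viewer", "members": ["user:dev@example.com"]},
    {"role": "roles/storage.admin",
     "members": ["serviceAccount:new-sa@example-project.iam.gserviceaccount.com"]},
]}
print(new_grants(before, after))
```

In production you would feed this from Cloud Audit Logs (SetIamPolicy events) or periodic policy exports rather than hand-built dicts, but the comparison logic is the same.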
Document Permission Models: Maintain a control document in your Information Security Management System (ISMS) that covers AI service account management. Include details on determining minimum required permissions, review frequency, approval processes, and detection of unauthorized privilege escalation.
Test Incident Response Plans: Conduct tabletop exercises to simulate compromised service accounts. Ensure your team can quickly identify accessible resources, revoke credentials, and rotate keys without disrupting production workflows.
The Vertex AI permission model vulnerability stems from a design pattern, not a one-off bug, and it requires proactive management. Default configurations prioritize ease of deployment over security. Your responsibility is to implement the controls required by ISO/IEC 27001:2022, NIST SP 800-53 Rev. 5, and PCI DSS v4.0.1, even when the platform does not enforce them.