What Happened
Aqua Security documented a targeted attack campaign that compromised at least sixty Kubernetes clusters through misconfigured role-based access control (RBAC) policies. Attackers gained persistent backdoor access by exploiting exposed service account tokens and overly permissive ClusterRoleBindings. Once inside, they maintained access through RBAC manipulation, allowing them to return even after initial remediation attempts.
The attack pattern was methodical: identify clusters with exposed API servers, probe for weak RBAC configurations, establish persistent access through elevated service accounts, and maintain presence through modified role bindings that survived pod restarts and basic cleanup efforts.
Timeline
T+0 to T+20 minutes: Research using Kubernetes honeypots shows that newly deployed clusters face automated reconnaissance within twenty minutes of creation. Attackers scan for exposed API endpoints and authentication weaknesses.
Discovery phase: Attackers identified clusters with publicly accessible API servers and began probing authentication mechanisms, specifically targeting environments where default service account tokens had not been restricted.
Initial compromise: Exploitation of weak RBAC policies allowed attackers to escalate from limited service account access to cluster-admin equivalent permissions.
Persistence establishment: Attackers created custom ClusterRoles and bindings that survived standard incident response procedures. These backdoors persisted across pod deletions and namespace cleanups.
Detection: Aqua Security identified the campaign through anomalous RBAC modification patterns across multiple client environments.
Which Controls Failed or Were Missing
Service account token management: Compromised clusters ran with default service account tokens automatically mounted to every pod, violating the principle of least privilege by giving every workload unnecessary API access.
RBAC policy review: No automated validation caught the creation of overly permissive ClusterRoles. The clusters lacked continuous monitoring of RBAC changes, allowing backdoor roles to persist undetected.
API server exposure: Kubernetes API servers were accessible from the public internet without additional authentication layers. Network segmentation controls that should have restricted API access to trusted networks were either missing or misconfigured.
Admission control: No admission controllers were enforcing policies on service account usage or RBAC modifications. Pod Security Admission or third-party policy engines that could have blocked suspicious role bindings were not implemented.
Audit logging: Even in environments with audit logs enabled, the volume and complexity of RBAC events made it difficult to identify malicious modifications without specific detection rules.
What the Relevant Standards Require
NIST 800-53 Rev 5 AC-2: Account Management requires that organizations "define and document the types of accounts allowed and specifically prohibited for use within the system." For Kubernetes, this means explicitly defining which service accounts need API access and removing automatic token mounting where unnecessary.
NIST 800-53 Rev 5 AC-6: Least Privilege mandates employing "the principle of least privilege, allowing only authorized accesses for users (or processes acting on behalf of users) that are necessary to accomplish assigned organizational tasks." Default service accounts with cluster-wide permissions violate this requirement directly.
ISO 27001 Annex A.9.2: User access management requires regular review of access rights. In Kubernetes, this translates to continuous monitoring of ClusterRoles, Roles, and their bindings. The standard requires that you detect and investigate unauthorized changes to access controls.
PCI DSS v4.0.1 Requirement 7.2.2: Access to system components must be assigned based on job function with least privileges necessary. For containerized environments processing payment data, this means service accounts must have explicitly defined, minimal RBAC permissions.
SOC 2 Type II CC6.6: The entity implements logical access security measures to protect against threats from sources outside its system boundaries. Exposing Kubernetes API servers to the public internet without additional authentication controls fails this criterion.
Lessons and Action Items for Your Team
Disable automatic service account token mounting. Add automountServiceAccountToken: false to your pod specifications by default. Create service accounts with API access only for workloads that explicitly need it. This single change removes one of the most common initial access vectors.
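A minimal manifest pair along these lines opts a workload out of token mounting (the names, namespace, and image are placeholders to adapt):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa               # placeholder name
  namespace: payments        # placeholder namespace
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: payments
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: false   # pod-level override, belt and braces
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
```

Setting the flag on both the ServiceAccount and the pod spec means a workload only gets a token when someone deliberately turns it back on.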
Implement RBAC continuous monitoring. Deploy tooling that alerts on ClusterRole and ClusterRoleBinding modifications. Write detection rules for patterns like service accounts gaining cluster-admin permissions or new bindings to high-privilege roles. Tools like Falco or cloud-native SIEM solutions can ingest Kubernetes audit logs and alert on suspicious RBAC changes within seconds.
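Those detections are only as good as the audit events behind them. One way to make RBAC changes stand out from the noise is to raise the audit level for RBAC resources specifically; a sketch of a Kubernetes audit Policy along those lines:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record full request and response bodies for every RBAC change,
  # so detection rules can see exactly which permissions were granted.
  - level: RequestResponse
    resources:
      - group: rbac.authorization.k8s.io
        resources:
          - clusterroles
          - clusterrolebindings
          - roles
          - rolebindings
  # Log everything else at metadata level to keep volume manageable.
  - level: Metadata
```

The policy is passed to the API server (or your managed platform's audit configuration); how you wire it up depends on your distribution.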
Restrict API server access. Place your Kubernetes API servers behind a VPN or bastion host. If you must expose them, implement additional authentication through an identity-aware proxy. Configure network policies that allow API access only from known CIDR ranges.
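On managed platforms this is usually a cluster-level setting rather than a network policy. As one hedged example, an eksctl ClusterConfig sketch for EKS that enables the private endpoint and pins public access to a known range (cluster name, region, and CIDR are placeholders; verify the field names against your eksctl version):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: payments-prod        # placeholder
  region: us-east-1          # placeholder
vpc:
  clusterEndpoints:
    privateAccess: true      # reach the API from inside the VPC
    publicAccess: true       # keep the public endpoint, but...
  publicAccessCIDRs:
    - "203.0.113.0/24"       # ...only from this trusted range (placeholder)
```

GKE (master authorized networks) and AKS (API server authorized IP ranges) expose equivalent controls.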
Enable and enforce admission control. Start with Pod Security Admission in enforce mode for at least the baseline profile. For RBAC-specific controls, implement OPA Gatekeeper or Kyverno policies that:
- Block service accounts from gaining cluster-admin or wildcard permissions
- Require approval workflows for ClusterRole modifications
- Prevent service account token mounting in namespaces that don't need it
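As one illustration of the first bullet, a Kyverno ClusterPolicy along these lines would reject new bindings to cluster-admin at admission time (policy and rule names are invented; treat this as a sketch to adapt, not a drop-in, and carve out exceptions for your bootstrap tooling):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-cluster-admin-bindings   # invented name
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: deny-cluster-admin-bindings
      match:
        any:
          - resources:
              kinds:
                - ClusterRoleBinding
      validate:
        message: "New bindings to cluster-admin are not allowed; request an exception."
        deny:
          conditions:
            all:
              # Reject any ClusterRoleBinding whose roleRef targets cluster-admin.
              - key: "{{ request.object.roleRef.name }}"
                operator: Equals
                value: cluster-admin
```

A production version would typically also match Role/RoleBinding wildcards and exclude known platform service accounts.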
Conduct RBAC audits. Run kubectl get clusterrolebindings -o json and review every binding to the cluster-admin role. Identify service accounts with * permissions on * resources. Your audit should answer: "If this service account were compromised, what could an attacker do?" If the answer is "everything," you have work to do.
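The review scales better as a script than as eyeballing JSON. A small helper, assuming the JSON shape that kubectl emits (the function and variable names are ours), that flags every subject bound to cluster-admin:

```python
import json


def risky_bindings(dump: dict) -> list[str]:
    """Return 'binding: Kind/name' for every subject bound to cluster-admin.

    `dump` is the parsed output of: kubectl get clusterrolebindings -o json
    """
    findings = []
    for item in dump.get("items", []):
        # Only bindings whose roleRef targets cluster-admin are of interest here.
        if item.get("roleRef", {}).get("name") != "cluster-admin":
            continue
        binding = item["metadata"]["name"]
        # A binding with no subjects grants nothing, so `or []` is safe.
        for subj in item.get("subjects") or []:
            findings.append(f'{binding}: {subj.get("kind")}/{subj.get("name")}')
    return findings


# Intended usage:
#   kubectl get clusterrolebindings -o json > crb.json
#   python -c 'import json; from audit_rbac import risky_bindings; \
#       print("\n".join(risky_bindings(json.load(open("crb.json")))))'
```

Extending the same loop to ClusterRoles with * verbs on * resources answers the "what could a compromised account do" question for each finding.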
Implement runtime threat detection. Deploy agents that monitor for suspicious API calls from pod contexts. Unusual patterns include pods querying secrets outside their namespace, service accounts creating new roles, or API calls from workloads that shouldn't need API access.
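If the audit log feeds into Falco, a rule sketched like the following would flag one of those patterns, a workload service account modifying RBAC bindings (the rule name is invented, the field names come from Falco's Kubernetes audit event source, and the source identifier varies between Falco versions and the k8saudit plugin, so check yours):

```yaml
- rule: ServiceAccount Modified ClusterRoleBinding   # invented name
  desc: >
    A workload service account created or changed a cluster-wide RBAC
    binding, a common persistence step after initial compromise.
  condition: >
    ka.verb in (create, update, patch)
    and ka.target.resource = clusterrolebindings
    and ka.user.name startswith "system:serviceaccount:"
  output: >
    RBAC binding changed by a service account
    (user=%ka.user.name binding=%ka.target.name verb=%ka.verb)
  priority: CRITICAL
  source: k8s_audit   # may be k8saudit depending on your setup
```

Companion rules for cross-namespace secret reads and unexpected in-pod API calls cover the other patterns listed above.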
The attack on sixty clusters wasn't sophisticated—it exploited Kubernetes defaults that prioritize convenience over security. Your clusters likely have similar vulnerabilities right now. Start with service account token mounting tomorrow. The reconnaissance bots are already scanning.



