Overview of Findings
Security researchers at Wiz spent two years examining AI infrastructure deployments to uncover exploitable vulnerabilities. Their findings reveal that while many teams focus on prompt injection attacks, the real threats lie in model serving layers, data pipelines, inference APIs, and orchestration frameworks. This isn't an isolated issue but a widespread pattern of misconfigurations, exposed endpoints, and inadequate access controls in organizations deploying AI.
Timeline of Research
Months 1-6: Initial investigations showed that organizations often treat AI deployments as experimental, applying fewer security controls than for production systems.
Months 7-12: Common issues included publicly accessible model endpoints, training data in unsecured storage, and API keys in client-side code.
Months 13-18: Multi-tenant AI platforms were found to have isolation failures, allowing cross-customer data access through inference APIs.
Months 19-24: Researchers documented supply chain vulnerabilities in popular ML frameworks, where compromised dependencies could introduce malicious code into training pipelines.
Key Security Failures
Access Control Issues
Model serving endpoints often lacked authentication. In many cases, inference APIs relied on obscurity rather than robust security measures, allowing unauthorized access to sensitive data.
Data Classification Lapses
Organizations frequently did not classify training datasets and model outputs according to data governance policies, leading to exposure of PII, financial records, and proprietary information.
Network Segmentation Deficiencies
AI infrastructure was often located in the same network segments as development environments, enabling lateral movement and exposing model training clusters to unauthorized access.
Supply Chain Oversight
Teams deployed pre-trained models from public repositories without verifying their integrity or maintaining a software bill of materials (SBOM) for ML dependencies.
Lack of Monitoring
Most deployments lacked logging for model access patterns or unusual query sequences, leaving security teams blind to potential attacks.
Compliance Standards Overview
ISO/IEC 27001:2022
Annex A control A.8.2 requires managing privileged access rights, which includes access to model endpoints. Control A.5.15 mandates consistent access control policies across all systems, including AI inference engines.
NIST Cybersecurity Framework v1.1
PR.AC-4 emphasizes managing access permissions with least privilege principles. PR.DS-5 requires data leak protections, necessitating classification and control of model training data and outputs.
SOC 2 Trust Services Criteria
CC6.1 requires logical and physical access controls for information assets, including AI infrastructure. CC7.2 demands monitoring for anomalies, such as unusual model query patterns.
OWASP ASVS v4.0.3
V4.1 requires function-level access control for API endpoints, including model serving endpoints. V8.3 mandates identifying and classifying sensitive data and applying appropriate controls.
Actionable Steps for Your Team
Secure AI Infrastructure as Production Systems
- Implement authentication on all model endpoints (OAuth 2.0, API keys, mutual TLS); see the sketch after this list
- Use web application firewalls (WAFs) in front of inference APIs
- Enable audit logging with compliance-matching retention periods
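As a concrete starting point, here is a minimal sketch of an authenticated inference endpoint with audit logging. It assumes FastAPI as the serving framework; the endpoint path, header name, and key handling are illustrative, and the research does not prescribe any particular stack.

```python
# Minimal sketch: API-key authentication plus audit logging on an
# inference endpoint. FastAPI is an assumption; endpoint and model
# names are hypothetical.
import hmac
import logging
import os

from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("inference.audit")

app = FastAPI()
# In production the key would come from a secrets manager, not an env var.
EXPECTED_KEY = os.environ.get("INFERENCE_API_KEY", "")

def require_api_key(x_api_key: str = Header(default="")) -> str:
    # Constant-time comparison avoids leaking key material via timing.
    if not hmac.compare_digest(x_api_key, EXPECTED_KEY):
        raise HTTPException(status_code=401, detail="invalid API key")
    return x_api_key

class PredictRequest(BaseModel):
    inputs: list[float]

@app.post("/v1/predict")
def predict(req: PredictRequest, key: str = Depends(require_api_key)):
    # Audit entry: who called and how large the payload was. Full query
    # content can go to a separately secured log per your monitoring policy.
    audit_log.info("predict: key_suffix=%s n_inputs=%d", key[-4:], len(req.inputs))
    score = sum(req.inputs) / max(len(req.inputs), 1)  # stand-in for a real model
    return {"score": score}
```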
Classify and Protect Training Data
- Apply your data classification framework to AI training datasets
- Tag datasets with sensitivity levels and apply matching access controls (sketched after this list)
- Encrypt training data at rest and in transit
- Document data lineage from source to deployed models
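Here is a minimal sketch of what dataset tagging with an enforced access check can look like, including a simple lineage field. The sensitivity labels and clearance model are illustrative assumptions, not part of any cited standard.

```python
# Sketch: tag datasets with a sensitivity level and enforce a matching
# access check before use. Labels, clearance model, and lineage fields
# are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3  # e.g. PII, financial records

@dataclass
class DatasetRecord:
    name: str
    source_uri: str                      # where the raw data came from
    sensitivity: Sensitivity
    derived_models: list[str] = field(default_factory=list)  # lineage: models trained on it

def load_dataset(record: DatasetRecord, caller_clearance: Sensitivity):
    # Deny access when the caller's clearance is below the dataset's label.
    if caller_clearance < record.sensitivity:
        raise PermissionError(
            f"{record.name} is {record.sensitivity.name}; "
            f"caller clearance is {caller_clearance.name}"
        )
    print(f"{datetime.now(timezone.utc).isoformat()} loading {record.source_uri}")
    # ... actual loading (with decryption of data at rest) would go here ...

customers = DatasetRecord("customer-churn", "s3://corp-data/churn.parquet",
                          Sensitivity.RESTRICTED, derived_models=["churn-v3"])
load_dataset(customers, caller_clearance=Sensitivity.RESTRICTED)  # allowed
```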
Enhance Network Segmentation
- Isolate AI infrastructure in dedicated VPCs or network segments (see the audit sketch after this list)
- Require bastion hosts or VPN access for administrative functions
- Implement microsegmentation and zero-trust principles
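Segmentation itself is infrastructure configuration, but it can be audited from code. Below is a sketch that uses boto3 to flag ML-tagged AWS security groups open to the internet; the `workload: ml` tag convention is a hypothetical example, and equivalent checks exist for other clouds.

```python
# Sketch: flag AWS security groups tagged as ML infrastructure that
# allow ingress from 0.0.0.0/0. AWS/boto3 and the tag scheme are assumptions.
import boto3

ec2 = boto3.client("ec2")

groups = ec2.describe_security_groups(
    Filters=[{"Name": "tag:workload", "Values": ["ml"]}]
)["SecurityGroups"]

for sg in groups:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"OPEN TO INTERNET: {sg['GroupId']} ({sg.get('GroupName', '')}) "
                      f"ports {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}")
```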
Strengthen ML Supply Chain Security
- Maintain an SBOM for all ML frameworks and models
- Scan ML dependencies for vulnerabilities
- Verify signatures and checksums for models from public sources (sketched after this list)
- Establish an approval process for new ML dependencies
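A short sketch of checksum verification before a model artifact is loaded follows; the file path and pinned digest are placeholders that would come from your approval process.

```python
# Sketch: verify a downloaded model artifact against a pinned SHA-256
# digest before loading it. Path and digest are placeholders.
import hashlib
from pathlib import Path

# Pinned when the model was approved for use (placeholder value).
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: Path, expected_sha256: str) -> None:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
    actual = digest.hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"checksum mismatch for {path}: got {actual}")

verify_artifact(Path("models/resnet50.onnx"), PINNED_SHA256)
# Only after verification should the artifact be deserialized and loaded.
```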
Deploy AI-Specific Monitoring
- Log all inference requests with query content and response metadata
- Alert on unusual query volumes or patterns (see the sketch after this list)
- Monitor for prompt patterns linked to attacks
- Track model performance metrics for signs of compromise
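As a minimal sketch of volume-based alerting, the sliding-window check below flags clients whose query rate exceeds a threshold; the window size and limit are illustrative assumptions, and real deployments would baseline per-client behavior.

```python
# Sketch: alert on unusual inference query volume using a sliding window.
# Window size and threshold are illustrative; a real system would also
# de-duplicate alerts and baseline per-client rates.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100   # assumed baseline, tune per endpoint

_recent: dict[str, deque] = defaultdict(deque)

def record_query(client_id: str, now: float | None = None) -> bool:
    """Record one inference call; return True if the client looks anomalous."""
    now = time.time() if now is None else now
    q = _recent[client_id]
    q.append(now)
    # Drop timestamps that fell out of the sliding window.
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_QUERIES_PER_WINDOW:
        print(f"ALERT: {client_id} made {len(q)} queries in {WINDOW_SECONDS}s")
        return True
    return False

# Simulate a burst that could indicate model-extraction probing.
for i in range(150):
    record_query("client-42", now=1000.0 + i * 0.1)
```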
Update Your Threat Model
- Include AI-specific attack vectors in your threat modeling
- Address model theft, training data extraction, model poisoning, and adversarial inputs
By treating AI deployments as the critical information systems they are, you can close these security gaps quickly. The standards already require these measures; it's time to apply them to your AI infrastructure.



