What Happened
Intruder's security team scanned over 1 million exposed AI services across the public internet, uncovering widespread security failures in AI infrastructure. The scan revealed that 31% of 5,200+ Ollama servers responded to a simple "Hello" prompt without requiring any authentication. These were production AI services processing queries with no access controls.
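For scale, reproducing that probe takes only a few lines. A minimal sketch, assuming Ollama's default port (11434) and its standard /api/generate endpoint; the target address and model name are placeholders:

```python
# Sketch of the unauthenticated "Hello" probe described above.
# The target IP and model name are placeholders; 11434 is Ollama's
# default port and /api/generate its standard completion endpoint.
import requests

resp = requests.post(
    "http://203.0.113.10:11434/api/generate",  # exposed host (example address)
    json={"model": "llama3", "prompt": "Hello", "stream": False},
    timeout=15,
)

# On an unauthenticated server this returns a full model completion:
# no credentials, no API key, no identity check of any kind.
print(resp.status_code, resp.json().get("response"))
```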
The team also found exposed instances of ClawdBot, a self-hosted AI assistant, running with default configurations that allowed unauthenticated access to conversation histories, system prompts, and API endpoints. Multiple AI management platforms were found broadcasting their dashboards to the internet with either no authentication or easily guessable default credentials.
Timeline
This was not a single breach event. Intruder conducted its scan during 2024 and found these services already exposed and operational. The security failures existed at deployment time, indicating organizations installed these AI tools without implementing basic access controls.
The pattern suggests that teams are rushing to deploy AI capabilities, often spinning up services in hours without following standard security review processes.
Which Controls Failed or Were Missing
Authentication and authorization controls were absent. The Ollama servers responding without authentication had no mechanism to verify who was sending prompts or what data they should access. Anyone with the IP address could interact with the models, potentially extracting training data, manipulating outputs, or using the compute resources for unauthorized purposes.
Default configuration hardening was neglected. ClawdBot and similar tools ship with default settings optimized for ease of setup, not security. The exposed instances Intruder found were running with the following (a quick check for the binding symptom is sketched after the list):
- Default API keys still active
- Administrative interfaces bound to 0.0.0.0 (all network interfaces)
- No TLS/SSL encryption on API endpoints
- Verbose error messages exposing internal paths and configuration details
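As referenced above, the interface-binding symptom is easy to check from the host itself. A sketch, assuming the third-party psutil package and that you know which ports your AI services use (it may require elevated privileges on some platforms):

```python
# Sketch: flag listeners bound to all network interfaces on a host.
# Assumes psutil (pip install psutil); the port list is an example
# and should be adjusted to your own deployments.
import psutil

AI_SERVICE_PORTS = {11434, 8080}  # adjust to your AI services

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port in AI_SERVICE_PORTS:
        if conn.laddr.ip in ("0.0.0.0", "::"):
            print(f"WARNING: port {conn.laddr.port} listens on ALL interfaces")
        else:
            print(f"port {conn.laddr.port} bound to {conn.laddr.ip} (OK if internal)")
```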
Network segmentation was missing. These AI services were directly exposed to the internet rather than placed behind VPNs, bastion hosts, or API gateways with proper authentication layers. There's no technical reason for an internal AI assistant to have a public IP address, yet organizations deployed them this way.
Change management processes were bypassed. Standard deployment procedures—which would include security review, configuration audits, and network architecture approval—were clearly not applied to these AI service deployments.
What the Relevant Standards Require
ISO/IEC 27001:2022 Annex A control 5.15 (Access control) requires rules controlling physical and logical access to information and associated assets, established and enforced on the basis of business and security requirements. An API endpoint that processes requests without verifying the requester's identity has no such rules in place.
NIST 800-53 Rev 5 Control AC-2 mandates account management, including "conditions for group and role membership." An AI service that accepts anonymous queries has no account management at all.
PCI DSS v4.0.1 Requirement 7.2.1 states: "Access to system components and data is assigned to users based on job classification and function." If your AI service processes payment card data (or connects to systems that do), you need role-based access control.
SOC 2 Type II Common Criteria CC6.1 requires logical and physical access controls to protect system resources from unauthorized access. Default configurations with no authentication won't pass.
The standards also require secure defaults. NIST 800-53 CM-7 calls for least functionality—systems should be configured to provide only essential capabilities. Shipping an AI service with authentication disabled by default violates this principle, but the responsibility for enabling it falls on your team at deployment.
Lessons and Action Items for Your Team
Inventory your AI services today. You likely have more than you think. Developers spin up Ollama instances for testing, data scientists deploy Jupyter notebooks with LLM integrations, and product teams install AI assistants for customer support. Create a registry that includes the following fields (a minimal schema sketch follows the list):
- Service name and purpose
- Network location (internal/DMZ/public)
- Authentication method
- Data classification of inputs and outputs
- Owner and deployment date
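The referenced schema, sketched as a Python dataclass; the field names and enum values are illustrative, not a standard:

```python
# Sketch of a registry entry matching the fields listed above.
# Names and enum values are illustrative placeholders.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class NetworkZone(Enum):
    INTERNAL = "internal"
    DMZ = "dmz"
    PUBLIC = "public"

@dataclass
class AIServiceRecord:
    name: str                  # service name
    purpose: str               # what it is for
    network_zone: NetworkZone  # internal / DMZ / public
    auth_method: str           # e.g. "OIDC via corporate IdP", or "none"
    data_classification: str   # classification of inputs and outputs
    owner: str
    deployed_on: date

# Example entry:
record = AIServiceRecord(
    name="support-chatbot",
    purpose="customer support triage",
    network_zone=NetworkZone.INTERNAL,
    auth_method="OIDC via corporate IdP",
    data_classification="confidential",
    owner="platform-team",
    deployed_on=date(2024, 6, 1),
)
```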
Treat AI services like any other application. Your existing security requirements apply. If you require multi-factor authentication for your CRM, your AI chatbot needs it too. If you segment your database servers from the internet, your LLM inference endpoints belong behind the same boundaries.
Build a secure-by-default deployment template. Create a standardized configuration for AI services that includes:
- Authentication required (integrate with your existing IdP)
- TLS 1.3 for all API endpoints
- Network placement in your internal zone, not the DMZ
- Logging enabled and forwarded to your SIEM
- Rate limiting to prevent resource exhaustion
Make this template the only approved path for deploying AI services. Developers who need to move fast can do so within it.
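None of this requires exotic tooling. As a hedged sketch of the authentication and rate-limiting pieces of such a template (not any vendor's reference implementation), here is a small FastAPI proxy fronting a model endpoint bound to localhost; the UPSTREAM address, static token set, and limits are assumptions you would replace with your real IdP integration and policy:

```python
# Sketch: authenticated, rate-limited proxy in front of a model endpoint.
# Assumes FastAPI and httpx are installed. UPSTREAM and the static token
# set are placeholders: in production, validate tokens against your IdP
# and terminate TLS 1.3 at your reverse proxy.
import os
import time

import httpx
from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

UPSTREAM = os.environ.get("UPSTREAM", "http://127.0.0.1:11434")  # model bound to localhost only
API_TOKENS = set(filter(None, os.environ.get("API_TOKENS", "").split(",")))

app = FastAPI()
bearer = HTTPBearer()
_requests: dict[str, list[float]] = {}  # naive in-memory, per-token sliding window

def authorize(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> str:
    token = creds.credentials
    if token not in API_TOKENS:
        raise HTTPException(status_code=401, detail="invalid token")
    now = time.monotonic()
    window = [t for t in _requests.get(token, []) if now - t < 60]
    if len(window) >= 30:  # example policy: 30 requests per minute per token
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    window.append(now)
    _requests[token] = window
    return token

@app.post("/api/generate")
async def generate(request: Request, token: str = Depends(authorize)):
    # Forward the request body to the localhost-only model endpoint.
    body = await request.body()
    async with httpx.AsyncClient(timeout=120) as client:
        resp = await client.post(f"{UPSTREAM}/api/generate", content=body)
    return resp.json()
```

TLS termination, log forwarding to your SIEM, and network placement sit outside this process, at the reverse-proxy and infrastructure layers of the same template.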
Review your change management policy. If AI services are bypassing your standard deployment process, close that gap. The policy should explicitly state that AI/ML services, regardless of deployment method (containers, serverless, SaaS), require the same security review as traditional applications.
Audit existing deployments this week. For each AI service in your inventory (the first check is scripted in a sketch after this list):
- Attempt to access it without credentials
- Check if it's exposed to the internet (Shodan and Censys will show you what attackers see)
- Verify that authentication integrates with your directory service
- Confirm that API keys are rotated and not set to default values
- Review logs for suspicious access patterns
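As noted, the first check is easy to script. A sketch using the requests library; the service list would come from your registry, and the URLs shown are placeholders:

```python
# Sketch: probe inventoried endpoints with no credentials attached.
# SERVICES would come from your registry; these URLs are placeholders.
import requests

SERVICES = [
    {"name": "support-chatbot", "url": "https://chat.internal.example.com/api"},
    {"name": "ollama-dev", "url": "http://10.0.4.17:11434/api/tags"},
]

for svc in SERVICES:
    try:
        r = requests.get(svc["url"], timeout=5)  # deliberately no auth header
    except requests.RequestException as exc:
        print(f"{svc['name']}: unreachable ({exc.__class__.__name__})")
        continue
    if r.status_code in (401, 403):
        print(f"{svc['name']}: rejected unauthenticated request (good)")
    else:
        print(f"{svc['name']}: responded {r.status_code} WITHOUT credentials, investigate")
```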
Document what you find. If you discover services with no authentication, that's a critical finding that needs immediate remediation.
The Intruder scan proved that organizations are deploying AI faster than they're securing it. Your job is to close that gap before someone else finds what they found.