What Happened
In early 2025, security researchers at OX Security disclosed a critical vulnerability in Anthropic's Model Context Protocol (MCP) that allows remote code execution. This flaw affects over 7,000 publicly accessible servers and software packages with more than 150 million downloads. When notified, Anthropic declined to modify the protocol's architecture, stating the behavior was "expected."
The vulnerability arises from how MCP handles server connections. The protocol allows AI assistants to connect to external servers that provide context and tools, but lacks sufficient isolation between trusted and untrusted code execution paths. An attacker controlling an MCP server can execute arbitrary code on any client that connects to it.
Timeline
Discovery Phase: OX Security identified the architectural vulnerability during research into AI supply chain security. They found that MCP's design allows server-side code to execute with the same privileges as the client application.
Disclosure: Researchers notified Anthropic through responsible disclosure channels, documenting the remote code execution vector and its impact across the AI ecosystem.
Vendor Response: Anthropic acknowledged the report but declined to modify the protocol architecture, classifying the behavior as "expected" rather than a vulnerability requiring remediation.
Current State: The vulnerability remains unpatched. More than 7,000 servers and 150 million package downloads remain potentially exploitable. No timeline exists for architectural changes.
Which Controls Failed or Were Missing
Secure Defaults: MCP ships with configurations that trust server-provided code by default. There is no opt-in model for code execution, no sandboxing requirement, and no clear warning to developers about the trust implications of connecting to an MCP server.
Input Validation and Sanitization: The protocol lacks architectural enforcement of input validation between client and server boundaries. Servers can send executable payloads that clients process without sufficient verification.
Least Privilege: MCP servers execute with the full privileges of the client application. There is no privilege separation or restriction on what server-provided code can access.
Supply Chain Verification: The protocol provides no mechanism for clients to verify server identity, validate server code integrity, or establish trust relationships before accepting executable content.
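One concrete form such verification could take is digest pinning: the client refuses to trust a server artifact unless its hash matches a value recorded at audit time. A minimal Python sketch, assuming this sits in your client tooling (the server name and pinned digest are illustrative; MCP itself defines no such mechanism):

```python
import hashlib

# Pinned digests for audited MCP server artifacts (illustrative values;
# this particular digest is sha256 of the bytes b"test").
PINNED_DIGESTS = {
    "weather-server": "sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def artifact_digest(data: bytes) -> str:
    """Compute a sha256 digest string for a server artifact."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact matches its pinned digest.
    Unknown artifacts are rejected, not trusted by default."""
    expected = PINNED_DIGESTS.get(name)
    return expected is not None and artifact_digest(data) == expected
```

Pinning shifts the trust decision from connection time to audit time: a compromised or silently updated server artifact no longer matches and is rejected.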
Security Testing in Development: The architectural design appears to have proceeded without threat modeling for malicious server scenarios. The "expected behavior" response suggests security reviews did not treat untrusted servers as a design consideration.
What the Relevant Standards Require
OWASP ASVS v4.0.3 addresses this class of vulnerability directly:
Requirement 5.4.1: "Verify that the application uses memory-safe string, safer memory copy and pointer arithmetic to detect or prevent stack, buffer, or heap overflows." While MCP may not have memory safety issues specifically, the principle applies: unsafe operations require explicit protection.
Requirement 5.5.3: "Verify that deserialization of untrusted data is avoided or is protected in both custom code and third-party libraries." MCP accepts and executes server-provided code, which is a form of deserializing untrusted data and requires the same protection.
Requirement 14.2.1: "Verify that all components are up to date, preferably using a dependency checker during build or compile time." This applies to the AI supply chain: your application's security depends on the security of MCP servers you connect to.
NIST 800-53 Rev 5 provides relevant controls:
SC-7 (Boundary Protection): "The information system monitors and controls communications at the external boundary of the system and at key internal boundaries within the system." MCP lacks boundary controls between client and server execution contexts.
SC-18 (Mobile Code): "The organization establishes usage restrictions and implementation guidance for mobile code technologies based on the potential to cause damage to the information system if used maliciously." Server-provided MCP code is mobile code that requires restrictions.
SI-7 (Software, Firmware, and Information Integrity): "The organization employs integrity verification tools to detect unauthorized changes to software." MCP provides no integrity verification for server code.
ISO/IEC 27001:2022 Annex A controls that apply:
A.8.20 (Networks Security): Requires that networks be secured, managed, and controlled to protect information in systems and applications, with A.8.21 and A.8.22 covering security of network services and segregation of networks. MCP's architecture conflates network communication with code execution.
A.8.30 (Outsourced Development): Requires organizations to direct, monitor, and review the activities of externally provided development. When your AI assistant connects to external MCP servers, you are effectively outsourcing functionality without security guarantees.
Lessons and Action Items for Your Team
Immediate Actions
Inventory your MCP usage. If you're building applications that use Claude or other AI assistants with MCP support, document which MCP servers your systems connect to. Treat each connection as a trust boundary.
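As a starting point for that inventory, you can enumerate servers from a Claude Desktop-style configuration file, which lists them under an `mcpServers` key. A minimal sketch (the config snippet in the test is illustrative; adapt the parsing to however your own clients declare servers):

```python
import json

def list_mcp_servers(config_text: str) -> list[str]:
    """Extract configured MCP server names and launch commands from a
    Claude Desktop-style config (the "mcpServers" key)."""
    config = json.loads(config_text)
    servers = config.get("mcpServers", {})
    return [f"{name}: {spec.get('command', '?')}" for name, spec in servers.items()]
```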
Implement allowlisting. Do not allow your applications to connect to arbitrary MCP servers. Maintain an explicit allowlist of servers you control or have audited. Block all other connections at the network level if possible.
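A deny-by-default check is straightforward to sketch in Python (the hostnames here are placeholders for servers you control or have audited):

```python
from urllib.parse import urlparse

# Explicit allowlist of MCP server hosts you control or have audited.
ALLOWED_HOSTS = {"mcp.internal.example.com", "tools.example.com"}

def is_allowed(server_url: str) -> bool:
    """Deny by default: permit connections only to allowlisted hosts."""
    host = urlparse(server_url).hostname
    return host in ALLOWED_HOSTS
```

Application-level checks like this complement, rather than replace, blocking at the network level: egress firewall rules catch connections your code never sees.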
Run MCP clients in isolated environments. Use containers, VMs, or sandboxes to limit the impact if a malicious server exploits the RCE vulnerability. Apply least privilege: the account running your MCP client should have minimal system access.
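For locally launched (stdio) MCP servers, one option is to wrap the server process in a locked-down container. A sketch that builds a Docker invocation, assuming a stdio server that needs no outbound network (relax `--network none` for servers that legitimately need it):

```python
def sandboxed_command(image: str, server_cmd: list[str]) -> list[str]:
    """Build a docker invocation that drops capabilities, disables the
    network, and mounts the root filesystem read-only before running an
    MCP server process."""
    return [
        "docker", "run", "--rm", "-i",
        "--cap-drop", "ALL",    # no Linux capabilities
        "--network", "none",    # no network access for stdio-only servers
        "--read-only",          # immutable root filesystem
        "--memory", "256m",     # bound resource usage
        image, *server_cmd,
    ]
```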
Monitor for unexpected behavior. Log all MCP server connections and watch for unusual patterns: connections to new servers, unexpected code execution, or privilege escalation attempts.
Architectural Changes
Treat MCP servers as untrusted by default. Even if you control the server today, assume it could be compromised tomorrow. Design your client applications to function with degraded capabilities if an MCP server behaves maliciously.
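Degraded operation can be as simple as wrapping each tool call so that a failure or an invalid result falls back to a safe default instead of propagating server-controlled output. A minimal sketch (the function names are hypothetical, not part of any MCP SDK):

```python
def call_with_fallback(tool_call, fallback_result, validate):
    """Treat the MCP server as untrusted: invoke the tool, validate its
    result, and degrade to a safe fallback on failure or bad output."""
    try:
        result = tool_call()
    except Exception:
        return fallback_result
    return result if validate(result) else fallback_result
```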
Implement defense in depth. Don't rely on MCP's architecture to protect you. Add your own input validation, rate limiting, and anomaly detection between your application and MCP servers.
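As one of those layers, rate limiting calls to an MCP server bounds the blast radius of a compromised or misbehaving endpoint. A minimal token-bucket sketch in Python (capacity and rate are yours to tune; pair it with schema validation on responses):

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter for calls to an MCP server."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a call may proceed, consuming one token."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```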
Separate concerns. If you need data from an MCP server, retrieve it through an API boundary you control rather than allowing direct MCP protocol connections from your production systems.
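Such a boundary can be a thin internal service that exposes only explicitly enumerated operations and forwards nothing else. A sketch of the core check (the operation names and `backend` callable are illustrative):

```python
# Narrow, explicitly enumerated operations exposed to production systems,
# instead of raw MCP protocol access.
ALLOWED_OPERATIONS = {"get_weather", "search_docs"}

def boundary_call(operation: str, params: dict, backend) -> dict:
    """Forward only enumerated operations, with minimally validated
    parameters, to the backend that actually talks to the MCP server."""
    if operation not in ALLOWED_OPERATIONS:
        raise PermissionError(f"operation not exposed: {operation}")
    if not all(isinstance(k, str) for k in params):
        raise ValueError("invalid parameters")
    return backend(operation, params)
```

The point of the indirection is that your production systems depend on an interface you version and audit, not on whatever a remote MCP server chooses to offer.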
Process and Governance
Update your vendor risk assessment process. This incident demonstrates that "expected behavior" from a vendor's perspective may be "critical vulnerability" from yours. When evaluating AI tools and protocols, explicitly assess how they handle untrusted input and code execution.
Document trust boundaries in your AI supply chain. Map which external services your AI systems depend on. For each dependency, document: What code does it execute? What data does it access? What happens if it's compromised?
Add AI supply chain to your threat model. If your organization performs threat modeling (and if you're subject to PCI DSS v4.0.1 Requirement 6.3.2, you must), add scenarios for compromised AI model providers, malicious training data, and vulnerable AI protocols.
The Broader Lesson
When a vendor tells you a remotely exploitable vulnerability is "expected behavior," that's a signal about their security priorities—not a reason to accept the risk. Your compliance obligations don't pause because a protocol designer made an architectural choice that conflicts with secure defaults.
The MCP vulnerability affects 150 million downloads because architectural decisions propagate through ecosystems. Your action items above will protect your specific systems, but the broader lesson is about how you evaluate and adopt emerging technologies. Secure defaults matter. Vendor accountability matters. And "it's supposed to work that way" is never an acceptable answer to remote code execution.