A critical vulnerability in Hugging Face's LeRobot platform highlights the overlooked risks of unsafe serialization formats in open-source projects. CVE-2026-25874, with a CVSS score of 9.3, allows unauthenticated remote code execution through Python's pickle format. Version 0.4.3 remains vulnerable as of this writing; a fix is planned for version 0.6.0.
This vulnerability isn't the result of a sophisticated attack but a fundamental design flaw that your code review process should catch.
What Happened
LeRobot's async inference pipeline accepts serialized data using Python's pickle format without authentication or validation. An attacker can send a malicious pickle payload to the inference endpoint and execute arbitrary code on the host system. No credentials are required—just a POST request with a crafted payload.
The vulnerability lies in the deserialization path where the platform loads model data and inference requests. Because deserializing a pickle stream can invoke arbitrary Python callables as it reconstructs objects, any untrusted input becomes a remote code execution vector.
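The mechanics are worth seeing concretely. A minimal sketch, using a harmless stand-in payload (the class name and the use of eval are illustrative, not taken from any real exploit): any object whose __reduce__ method returns a callable and arguments has that callable invoked the moment the bytes are deserialized.

```python
import pickle

# pickle reconstructs objects by calling whatever __reduce__ specifies.
# Deserializing this blob calls eval("6*7") before any application code
# can inspect the payload. A real attacker would substitute os.system
# or subprocess.run with a shell command.
class EvilPayload:  # hypothetical name, for illustration only
    def __reduce__(self):
        return (eval, ("6*7",))

blob = pickle.dumps(EvilPayload())
result = pickle.loads(blob)  # eval runs here, during deserialization
print(result)  # 42
```

Note that no EvilPayload instance is ever created on the receiving side; the code executes as a side effect of parsing the bytes, which is why no amount of post-deserialization validation helps.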
Timeline
The exact discovery date isn't publicly documented, but the vulnerability affects LeRobot 0.4.3. Hugging Face acknowledged the issue and committed to a fix in version 0.6.0. The gap between disclosure and patch creates a window where deployed instances remain exploitable.
If you're running LeRobot 0.4.3 in any environment that accepts external input, you have an unauthenticated RCE vulnerability right now.
Which Controls Failed
Input validation collapsed entirely. The platform accepts serialized pickle data without verifying the sender's identity or inspecting the payload structure. This violates the basic principle of secure input handling: never deserialize untrusted data using formats that can execute code.
Authentication was absent. The async inference pipeline processes requests without requiring any credentials. Even if you've secured other parts of your deployment, this endpoint remains open.
Network segmentation didn't contain the risk. If LeRobot runs in an environment with access to sensitive systems or data, an attacker who exploits this vulnerability inherits those privileges. The service account running LeRobot becomes the attacker's foothold.
Dependency security wasn't evaluated. Teams deploying LeRobot likely didn't audit the serialization mechanisms in their threat model. When you use an open-source ML platform, you're inheriting its security posture—including choices about pickle that predate your deployment.
What the Standards Require
OWASP ASVS v4.0.3 Requirement 5.5.3 states: "Verify that deserialization of untrusted data is avoided or is protected in both custom code and third-party libraries." Pickle fails this test by design. The format cannot safely deserialize untrusted input because its purpose is to reconstruct arbitrary Python objects, including those that execute code during deserialization.
PCI DSS v4.0.1 Requirement 6.2.4 mandates addressing vulnerabilities based on risk ranking. A CVSS 9.3 unauthenticated RCE qualifies as critical. If your LeRobot deployment processes or has access to cardholder data, you're out of compliance until you've patched, mitigated, or removed the vulnerable component.
NIST 800-53 Rev 5 Control SI-10 requires your systems to check information inputs for validity. Accepting pickle-serialized data from unauthenticated sources violates this control. The mitigation isn't just input validation—it's format selection. You cannot validate pickle inputs securely because the format itself is the vulnerability.
ISO 27001:2022 Annex A.8.28 addresses secure coding practices. Using pickle for data interchange in a networked service demonstrates a gap in secure development training. Your developers need to understand that some libraries and formats are unsuitable for production use, regardless of convenience.
Lessons and Action Items for Your Team
Audit your serialization formats today. Search your codebase for pickle.load, pickle.loads, and similar deserialization calls. Every instance that processes external input is a potential RCE vector. Replace pickle with JSON, Protocol Buffers, or MessagePack for data interchange. Reserve pickle—if you use it at all—for trusted, internal state persistence where you control both serialization and deserialization.
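The replacement is usually mechanical. A minimal sketch of a JSON-based request codec (the field names here are illustrative, not LeRobot's actual schema): JSON deserialization can only produce dicts, lists, strings, numbers, booleans, and null, so a hostile payload is at worst malformed and raises an exception instead of executing code.

```python
import json

def encode_request(observation: dict) -> bytes:
    """Serialize an inference request as UTF-8 JSON."""
    return json.dumps({"observation": observation}).encode("utf-8")

def decode_request(raw: bytes) -> dict:
    """Deserialize and validate a request before using it."""
    payload = json.loads(raw.decode("utf-8"))
    # Structural validation happens BEFORE any data is used --
    # something pickle never allowed, because code ran first.
    if not isinstance(payload, dict) or "observation" not in payload:
        raise ValueError("malformed inference request")
    return payload["observation"]

obs = {"joint_positions": [0.0, 1.57, -0.5]}
assert decode_request(encode_request(obs)) == obs
```

For binary-heavy payloads such as camera frames, the same pattern applies with MessagePack or Protocol Buffers; the point is that the format's deserializer produces only data, never calls.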
Require authentication on all inference endpoints. Even internal ML services need authentication. Use API keys, mutual TLS, or service mesh authentication to verify callers. The assumption that "internal" means "trusted" breaks down the moment an attacker gains any network access.
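A minimal API-key gate can be this small. The header name and the function shape below are illustrative assumptions, not any framework's convention; the one non-obvious detail is using hmac.compare_digest for a constant-time comparison that resists timing attacks.

```python
import hmac

def is_authorized(headers: dict, expected_key: str) -> bool:
    """Check a request's API key against the server's expected key.

    Uses hmac.compare_digest so the comparison takes the same time
    whether the key is wrong at the first byte or the last, preventing
    an attacker from recovering the key via timing measurements.
    An empty/unset expected key denies all requests rather than
    silently allowing them.
    """
    presented = headers.get("X-API-Key", "")
    return bool(expected_key) and hmac.compare_digest(presented, expected_key)

assert is_authorized({"X-API-Key": "s3cret"}, "s3cret")
assert not is_authorized({"X-API-Key": "wrong"}, "s3cret")
assert not is_authorized({}, "s3cret")
```

In production you would load the expected key from a secrets manager or environment variable and wire this check into the endpoint's request handler or middleware.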
Implement network segmentation around ML services. Your inference endpoints should run in isolated network zones with explicit firewall rules. If an attacker exploits a vulnerability in your ML platform, they shouldn't automatically gain access to databases, internal APIs, or production systems. Defense in depth matters when you're running code that processes untrusted input.
Add serialization security to your threat model template. When you evaluate new dependencies or design new services, explicitly document: What serialization formats does this use? Can attackers control the serialized input? What happens during deserialization? This should be a standard question in your security review checklist, alongside SQL injection and authentication bypass checks.
Establish a vulnerability response timeline for open-source dependencies. LeRobot's fix is planned for 0.6.0, but there's no published release date. You need a policy: How long will you wait for an upstream patch before you fork, patch internally, or remove the dependency? For a CVSS 9.3 RCE, your timeline should be measured in days, not months.
Contribute security findings to open-source projects you depend on. If you discover vulnerabilities in your dependencies, report them. If you develop mitigations or patches while waiting for official fixes, contribute them upstream. Your security team has a stake in the health of the open-source ecosystem you're building on.
The LeRobot vulnerability isn't exotic. It's the predictable outcome of using an unsafe serialization format in a networked service. Your threat model should catch this class of vulnerability before code reaches production. If it didn't, you need better security review processes—and you need them before the next open-source dependency introduces the next pickle-based RCE.