
Server-Side Request Forgery

Also known as: SSRF, Server Side Request Forgery
Simply put

Server-Side Request Forgery is a security vulnerability in which an attacker tricks a web application or API into making network requests on the attacker's behalf. Because these requests originate from the server itself, they may reach internal resources that are not directly accessible to the attacker. This can expose internal admin panels, cloud metadata endpoints, or other restricted systems within the server's network.

Formal definition

SSRF is a vulnerability class (catalogued as CWE-918) in which insufficient validation of attacker-supplied input causes a server-side application to issue HTTP or other network requests to unintended destinations. The forged requests originate from the server's own network context, allowing an attacker to bypass perimeter controls and interact with internal or restricted resources such as internal administrative interfaces, cloud instance metadata services, or other backend systems that rely on network-layer trust rather than application-layer authentication. The attack surface typically arises when user-controlled data is used to construct URLs or network targets that the application then fetches without adequate allowlist enforcement or request isolation.
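
To make the pattern concrete, here is a minimal sketch of the vulnerable shape, assuming a Flask application and the requests library; the route and parameter names are illustrative, not taken from any particular codebase:

```python
# Hypothetical endpoint illustrating the CWE-918 pattern: the server fetches
# whatever URL the client supplies, with no validation of the destination.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args["url"]   # attacker-controlled input
    resp = requests.get(url)    # server-side request to an unvalidated target
    return resp.text            # response content returned to the attacker

# A request such as /fetch?url=http://169.254.169.254/latest/meta-data/
# would cause the server to query its own cloud metadata endpoint.
```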

Why it matters

SSRF is significant because it allows an attacker to weaponize a server's own network position against the infrastructure it sits within. Servers typically have privileged access to internal services, cloud metadata endpoints, and backend systems that are not reachable from the public internet. When an attacker can direct server-side requests to those destinations, perimeter controls such as firewalls and network segmentation lose much of their protective value, because the malicious traffic originates from a trusted internal source rather than an external one.

Who it's relevant to

Web Application Developers
Developers building features that fetch remote content, accept user-supplied URLs, or integrate with external services directly determine whether SSRF is introduced or prevented. Proper allowlist enforcement on permitted destinations, avoidance of user-controlled network targets where possible, and request isolation are the primary mitigations that must be applied at the code level.
Security Engineers and AppSec Teams
Security engineers conducting threat modeling, code review, or penetration testing need to identify SSRF-prone patterns across the application's attack surface. Because SSRF impact is heavily dependent on the network environment the server operates in, assessing severity requires understanding what internal resources the server can reach, making runtime and deployment context essential to a complete evaluation.
Cloud and Infrastructure Engineers
In cloud environments, SSRF is particularly consequential because instance metadata services commonly expose credentials and configuration data to any process running on the host. Infrastructure engineers should apply defensive controls at the network layer, such as restricting access to metadata endpoints and enforcing egress filtering, to limit what a server can reach even if an application-layer SSRF vulnerability is exploited.
API Designers and Architects
SSRF is not limited to traditional web applications; APIs that accept URLs or endpoint references as parameters are equally susceptible. Architects should evaluate whether features that require server-initiated outbound requests are necessary and, where they are, design those features with strict destination controls and minimal network privilege from the outset.

Inside SSRF

Attacker-Controlled URL or Destination
The core element of an SSRF vulnerability: user-supplied or externally influenced input that the server uses to construct and issue an outbound request, without sufficient validation or restriction of the target.
Server-Initiated HTTP or Network Request
The outbound request made by the vulnerable server on behalf of the attacker, which may target internal services, cloud metadata endpoints, or other network resources inaccessible directly from the attacker's position.
Internal Network or Metadata Endpoint Exposure
The class of resources that SSRF typically exposes, including private IP ranges, localhost services, and cloud provider instance metadata endpoints such as the AWS EC2 metadata service at 169.254.169.254.
Allowlist or Denylist Validation Controls
The primary mitigation controls applied to outbound request destinations, where allowlists restrict the server to a predefined set of approved targets and denylists attempt to block known dangerous destinations. Allowlists are generally more effective than denylists (a sketch contrasting the two follows this list).
DNS Rebinding and URL Parser Inconsistencies
Bypass techniques relevant to SSRF mitigations, where DNS rebinding causes a hostname to resolve to a disallowed IP after initial validation, and inconsistencies between URL parsers can cause validation logic to evaluate a different destination than the one actually requested.
Blind SSRF
A variant in which the server issues the attacker-influenced request but does not return the response content to the attacker, requiring out-of-band detection methods such as monitoring for unexpected DNS lookups or callbacks to attacker-controlled infrastructure.
Egress Filtering and Network-Level Controls
Defense-in-depth controls applied at the network layer to restrict which external or internal destinations the application server is permitted to reach, complementing application-level validation.
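
To make the allowlist/denylist contrast referenced above concrete, here is a minimal sketch using Python's standard ipaddress module; the permitted-host set is an illustrative assumption, not a recommendation of specific names:

```python
import ipaddress

# Illustrative allowlist: the only destinations this service legitimately needs.
ALLOWED_HOSTS = {"api.partner.example", "cdn.partner.example"}

def denylist_check(ip_str: str) -> bool:
    """Denylist approach: reject known-dangerous ranges. Bypassable via
    DNS rebinding, redirects, and ranges the list fails to anticipate."""
    ip = ipaddress.ip_address(ip_str)
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)

def allowlist_check(host: str) -> bool:
    """Allowlist approach: permit only explicitly approved hosts.
    Fails closed when a destination was never anticipated."""
    return host in ALLOWED_HOSTS
```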

Common questions

Answers to the questions practitioners most commonly ask about SSRF.

Does blocking private IP ranges in a denylist reliably prevent SSRF attacks?
No. Denylist-based approaches that filter private IP ranges are typically insufficient on their own because attackers can bypass them using DNS rebinding, alternative IP representations (such as decimal or octal encodings), redirects through allowed hosts, or cloud provider metadata endpoints that the denylist may not cover. Allowlist-based controls, which explicitly permit only known required destinations, are generally more effective than denylists for SSRF mitigation.
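
A short illustration of the representation problem, using Python's standard ipaddress module; the point is that equivalent forms of the loopback address never contain the literal string a naive filter looks for:

```python
import ipaddress

# All of these denote the same loopback address in different notations.
print(ipaddress.ip_address(2130706433))          # 127.0.0.1 (decimal integer)
print(ipaddress.ip_address(0x7F000001))          # 127.0.0.1 (hexadecimal)
print(ipaddress.ip_address("::ffff:127.0.0.1"))  # ::ffff:7f00:1 (IPv4-mapped IPv6)
# Many URL fetchers will happily request http://2130706433/, so a filter
# matching the literal string "127.0.0.1" never sees a hit.
```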
Does SSRF only affect applications that explicitly fetch URLs provided by users?
No. SSRF can occur in any application feature that causes the server to make outbound network requests, even when the destination is not directly supplied by the user. Webhook configurations, PDF or document rendering engines, image processors, XML parsers with external entity support, and integrations that resolve user-supplied hostnames can all be vulnerable. The triggering input does not need to look like a URL to result in a server-side request being made.
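
As an illustration, a classic XXE payload can drive a server-side request even though nothing the user submits looks like a URL parameter. The payload below is shown as an inert Python string; the internal hostname is a placeholder:

```python
# A classic XXE payload: if the server parses this with external entities
# enabled, the XML parser itself issues the HTTP request to the target.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE doc [
  <!ENTITY xxe SYSTEM "http://internal-admin.local/status">
]>
<doc>&xxe;</doc>"""
# The triggering input is an XML document, not a URL field, yet the effect
# is a server-initiated request to a destination the attacker chose.
```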
How should an application validate URLs or hostnames to reduce SSRF risk in practice?
Applications should validate URLs against an allowlist of permitted schemes, hosts, and ports before making any outbound request. Validation should occur after full URL parsing and normalization, not against the raw input string, to avoid bypass via encoding tricks. Where possible, DNS resolution should be performed and the resulting IP address checked against permitted ranges before the connection is established, though developers should be aware that DNS rebinding can still occur between the validation check and the actual connection.
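
A minimal sketch of that ordering, using only the Python standard library; the allowlisted scheme, host, and port values are illustrative assumptions, and the rebinding caveat from the answer above still applies between the final check and the connection:

```python
import ipaddress
import socket
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"https"}                  # illustrative policy
ALLOWED_HOSTS = {"api.partner.example"}
ALLOWED_PORTS = {443}

def validate_outbound_url(raw_url: str) -> str:
    # Parse and normalize first; never match patterns on the raw string.
    parts = urlsplit(raw_url)
    host = (parts.hostname or "").lower()
    port = parts.port or {"https": 443, "http": 80}.get(parts.scheme)

    if parts.scheme not in ALLOWED_SCHEMES:
        raise ValueError("scheme not allowed")
    if host not in ALLOWED_HOSTS:
        raise ValueError("host not allowed")
    if port not in ALLOWED_PORTS:
        raise ValueError("port not allowed")

    # Resolve and check every returned address before connecting. A gap
    # remains between this check and the connection (DNS rebinding), so
    # pinning the resolved IP at connect time is still advisable.
    for info in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            raise ValueError("resolved address is not externally routable")
    return raw_url
```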
What network-level controls can reduce the impact of SSRF vulnerabilities?
Egress filtering at the network layer can restrict which destinations a server is permitted to reach, limiting the impact of successful SSRF exploitation. Segmenting internal services so that application servers do not have direct network access to sensitive internal resources reduces the blast radius. Blocking access to cloud provider instance metadata endpoints (such as the link-local address 169.254.169.254) from application processes is particularly important in cloud environments. These controls do not prevent SSRF at the application level but may contain what an attacker can reach.
Can static analysis tools reliably detect SSRF vulnerabilities in source code?
Static analysis tools can identify patterns where user-controlled input flows into HTTP request functions or URL construction, flagging potential SSRF vulnerabilities. However, they typically produce false positives where user input is already validated through paths the tool cannot trace, and false negatives where the data flow passes through indirect sinks, serialization layers, or third-party libraries not covered by the tool's rules. Static analysis cannot determine at the code level whether runtime network controls are in place, so findings require triage with deployment context in mind.
How should SSRF risk be assessed differently for applications deployed in cloud environments versus on-premises environments?
Cloud environments introduce additional SSRF risk because instance metadata services, typically accessible via a well-known link-local address, may expose credentials, configuration data, and identity tokens to any process that can make an outbound HTTP request. This means a successful SSRF in a cloud-hosted application may allow an attacker to retrieve cloud provider credentials and pivot to other services beyond the immediate network. On-premises deployments may not have metadata endpoints but may expose internal APIs, databases, or management interfaces. Risk assessment should account for what services are reachable from the server and what sensitive data or capabilities those services expose.
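
For the AWS case specifically, the sketch below shows the kind of unauthenticated request an SSRF primitive can forge against IMDSv1, and why enforcing IMDSv2 (which requires a session token obtained via a PUT request) raises the bar; the paths are the documented AWS metadata paths:

```python
import requests

# From the server's own network position, the IMDSv1 credentials path is a
# plain unauthenticated GET -- exactly the request shape SSRF can forge.
base = "http://169.254.169.254/latest/meta-data"
role = requests.get(f"{base}/iam/security-credentials/", timeout=2).text
creds = requests.get(f"{base}/iam/security-credentials/{role.strip()}",
                     timeout=2).text
# With IMDSv2 enforced this fails: a token from a PUT to /latest/api/token
# (with a TTL header) must accompany every GET, which a typical GET-only
# SSRF primitive cannot obtain.
```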

Common misconceptions

Blocking requests to 127.0.0.1 and localhost is sufficient to prevent SSRF attacks targeting internal services.
Attackers can bypass such denylists using alternative representations of loopback addresses, DNS rebinding, IPv6 equivalents, decimal or octal IP notation, and redirects. A denylist approach alone is generally insufficient; allowlist-based controls combined with network egress filtering provide more reliable protection.
SSRF only poses a risk if the attacker can read the response returned by the forged request.
Blind SSRF, where response content is not returned to the attacker, can still be exploited to map internal network topology, trigger actions on internal services, exfiltrate data via out-of-band channels, or interact with metadata endpoints. The absence of visible response data does not eliminate the risk.
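
One common way testers confirm blind SSRF is an out-of-band callback, sketched below; the callback domain is a placeholder for tester-controlled infrastructure such as a Burp Collaborator or interactsh instance:

```python
import secrets

# Hypothetical out-of-band check: embed a unique, per-test hostname under a
# domain whose DNS and HTTP logs the tester controls (placeholder below).
token = secrets.token_hex(8)
payload_url = f"http://{token}.oob.example-callbacks.net/"
# Submit payload_url wherever the application accepts URL-shaped input.
# A later DNS lookup or HTTP hit for that unique hostname in the callback
# service's logs confirms the server issued the request, even though no
# response content ever reaches the tester.
```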
Static analysis tools can reliably detect all SSRF vulnerabilities in an application's codebase.
Static analysis can identify patterns where user-controlled input flows into HTTP request functions, but it typically cannot determine at the code level whether runtime validation, allowlisting, or network controls are effective. False negatives are common when data flows are indirect or when validation occurs in external libraries. Runtime context is required to assess whether specific destinations are actually reachable.

Best practices

Implement allowlist-based validation for all outbound request destinations, restricting the server to a predefined set of approved hostnames or IP ranges rather than relying on denylists of known-bad values.
Resolve and validate the IP address of a target hostname at the time of the request, not only at the time of initial input validation, to reduce exposure to DNS rebinding attacks.
Apply network-level egress filtering to restrict the application server from reaching internal IP ranges, cloud metadata endpoints, and other sensitive network segments, treating this as a defense-in-depth layer rather than a substitute for application-level controls.
Disable or sandbox HTTP redirect following in outbound HTTP clients where possible, or re-validate the destination after each redirect to prevent redirect-based bypass of destination controls (see the sketch after this list).
Use a dedicated, isolated service or proxy for outbound HTTP requests rather than allowing application servers to make arbitrary outbound connections directly, centralizing enforcement of allowlist and logging policies.
Include SSRF-specific test cases in security testing processes, covering blind SSRF scenarios using out-of-band callback detection, as well as common bypass techniques such as alternative IP representations and URL parser inconsistencies.
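
Tying the redirect and request-time validation practices together, a hedged sketch using the requests library; validate_outbound_url refers to the illustrative checker sketched earlier in this article, not a library function:

```python
import requests
from urllib.parse import urljoin

MAX_REDIRECTS = 3

def safe_fetch(url: str) -> requests.Response:
    """Follow redirects manually, re-validating each hop against the same
    destination policy instead of trusting the client's automatic follow."""
    for _ in range(MAX_REDIRECTS + 1):
        validate_outbound_url(url)  # re-check every hop, not just the first
        resp = requests.get(url, allow_redirects=False, timeout=5)
        if not resp.is_redirect:
            return resp
        # Resolve relative Location headers before the next validation pass.
        url = urljoin(url, resp.headers["Location"])
    raise RuntimeError("too many redirects")
```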