The Problem
A mid-sized fintech company's security team spent three weeks triaging 847 vulnerabilities flagged by their static analysis tool during a pre-release audit. After manual investigation, they found that only three represented actual exploitable risks in production code. The remaining 844 were theoretically valid but contextually unreachable: SQL injection patterns in dead code, XSS vectors in admin-only functions never exposed to user input, and authentication bypasses in deprecated endpoints already blocked at the gateway layer.
The audit didn't lead to a breach, but it exposed a critical flaw: the security tool couldn't differentiate between "this code contains a pattern" and "this pattern is exploitable in our environment." Development teams lost trust in security findings, and the three real vulnerabilities (a JWT validation bypass, an unpatched dependency in a critical payment flow, and a race condition in account creation) almost went unnoticed because engineers assumed they were more false positives.
Timeline of Events
Week 1, Monday: Static analysis scan completes. Tool flags 847 findings across severity levels. Security team begins manual triage.
Week 1, Friday: Team reviews 200 findings. 198 marked as false positives or unreachable. Developers start questioning the value of the exercise.
Week 2: Security analysts spend 40+ hours reviewing code paths, deployment configs, and network topology to determine reachability. Development team misses a sprint deadline.
Week 3, Wednesday: Final triage complete. Three exploitable vulnerabilities identified. Fixes deployed.
Week 3, Friday: Retrospective reveals the core issue: the tool had no understanding of runtime context, data flow, or architectural boundaries.
Where Controls Failed
The failure wasn't in the scanning tool itself—it performed as designed. The breakdown occurred in three areas:
1. Lack of Contextual Analysis
The static analyzer evaluated code in isolation. It couldn't determine if user input actually reached a function or if an endpoint was accessible from the internet. Every pattern match generated an alert, regardless of exploitability in the application's architecture and data flows.
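To see the gap concretely, consider a minimal, hypothetical example (not from the audited codebase) of the kind of pattern match that carries no real risk:

```python
# Hypothetical example: a pattern matcher flags the string-built SQL below
# as injection, but the function is only invoked by an operator-run
# maintenance script with an allowlisted table name, never by user input.

import sqlite3

def purge_legacy_rows(table_name: str) -> None:
    """Called only from an admin-only maintenance script; table_name
    comes from a hard-coded allowlist, never from a request."""
    conn = sqlite3.connect("app.db")
    # Static analysis sees "query built by concatenation" and alerts,
    # regardless of where table_name can actually come from.
    conn.execute("DELETE FROM " + table_name + " WHERE migrated = 1")
    conn.commit()
```

The pattern is real; the exploit path is not. Without data-flow and deployment context, the tool cannot tell the difference.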
2. Missing Automated Reachability Analysis
The team lacked a mechanism to automatically determine which code paths were reachable from attacker-controlled entry points. Manual analysis required deep knowledge of the codebase, infrastructure configuration, and deployment architecture—knowledge that didn't scale across 847 findings.
3. No Integration with Runtime Context
The static analyzer operated independently from infrastructure-as-code definitions, API gateway configurations, and authentication middleware. It couldn't see that certain "vulnerable" endpoints were behind authentication, or that specific functions were never called in production deployments.
Standards and Requirements
PCI DSS v4.0.1 Requirement 6.3.1 mandates that security vulnerabilities be identified using industry-recognized sources and assigned a risk ranking, with remediation driven by the risk they pose. Flooding your team with 844 unreachable findings violates the spirit of risk-based prioritization.
OWASP ASVS v4.0.3, Section 1.14 expects verification that components are not vulnerable to known attacks. A finding is only meaningful if it represents actual risk in your deployment.
ISO/IEC 27001:2022 Annex A.8.8 requires organizations to obtain timely information about technical vulnerabilities and evaluate exposure. Evaluating exposure means understanding whether the vulnerability is reachable and exploitable in your specific environment.
NIST 800-53 Rev 5 Control RA-5 requires remediation based on "an assessment of risk." You can't assess risk without understanding attack surface and reachability.
Actionable Steps for Your Team
Prioritize Findings by Reachability
Implement a reachability analysis layer before human triage. Tools that build a Code Property Graph (CPG) can trace data flow from entry points to potential vulnerability sites. If user input never reaches that SQL query, it shouldn't consume your team's time.
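Here is a minimal sketch of the idea in Python, assuming your scanner has already extracted a data-flow graph and a list of finding sinks. The graph, entry points, and finding structure below are illustrative, not any specific tool's output:

```python
from collections import deque

# Hypothetical data-flow graph: node -> nodes its data can flow into.
# In practice this comes from a CPG tool; here it is hand-written.
DATA_FLOW = {
    "POST /api/orders": ["validate_order", "log_request"],
    "validate_order": ["build_order_query"],
    "build_order_query": ["db.execute"],      # potential SQLi sink
    "admin_cli": ["purge_legacy_rows"],       # not attacker-reachable
    "purge_legacy_rows": ["db.execute_raw"],
}

ENTRY_POINTS = {"POST /api/orders"}  # attacker-controlled inputs

def reachable_from_entry(sink: str) -> bool:
    """BFS from attacker-controlled entry points to the finding's sink."""
    queue, seen = deque(ENTRY_POINTS), set(ENTRY_POINTS)
    while queue:
        node = queue.popleft()
        if node == sink:
            return True
        for nxt in DATA_FLOW.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

findings = [("SQLi", "db.execute"), ("SQLi", "db.execute_raw")]
for rule, sink in findings:
    priority = "triage" if reachable_from_entry(sink) else "suppress"
    print(f"{rule} at {sink}: {priority}")
```

The first finding is queued for human review; the second, which no attacker-controlled input can reach, never consumes an analyst's time.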
Map Your Attack Surface
Document which endpoints are internet-facing, which require authentication, and what data flows through each component. This context should feed directly into your vulnerability assessment process; infrastructure-as-code definitions are a ready-made, machine-readable source for it.
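A lightweight way to start, sketched in Python with hypothetical endpoints: an explicit inventory that triage logic can sort and filter.

```python
from dataclasses import dataclass

# Hypothetical attack-surface inventory; in practice, generate this from
# your API gateway config and infrastructure-as-code, not by hand.
@dataclass(frozen=True)
class Endpoint:
    path: str
    internet_facing: bool
    auth_required: bool
    handles: str  # what data flows through this component

SURFACE = [
    Endpoint("/api/payments", internet_facing=True, auth_required=True,
             handles="cardholder data"),
    Endpoint("/api/signup", internet_facing=True, auth_required=False,
             handles="PII"),
    Endpoint("/admin/reindex", internet_facing=False, auth_required=True,
             handles="internal metadata"),
]

# Internet-facing, unauthenticated endpoints deserve triage first.
for ep in sorted(SURFACE, key=lambda e: (not e.internet_facing, e.auth_required)):
    print(ep.path,
          "internet" if ep.internet_facing else "internal",
          "no-auth" if not ep.auth_required else "auth")
```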
Automate Context Gathering
Your security tools should query your infrastructure-as-code, API gateway rules, and authentication middleware automatically. If a "critical" finding exists in a function that's only callable by authenticated admin users, the risk profile changes dramatically.
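A hedged sketch of what that enrichment can look like, assuming gateway rules are exported to a machine-readable form (the rule structure and field names here are hypothetical):

```python
# Sketch of context-aware severity adjustment. GATEWAY_RULES stands in
# for a parsed export of API gateway or IaC configuration.

GATEWAY_RULES = {
    "/api/payments": {"exposed": True, "auth": "jwt"},
    "/internal/batch": {"exposed": False, "auth": "mtls"},
}

def adjust_severity(finding: dict) -> str:
    """Downgrade findings on routes that are unexposed or behind auth."""
    rule = GATEWAY_RULES.get(finding["route"],
                             {"exposed": True, "auth": None})
    if not rule["exposed"]:
        return "info"    # unreachable from the internet
    if rule["auth"] and finding["requires_unauth"]:
        return "medium"  # an intermediate control sits in front
    return finding["severity"]

print(adjust_severity({"route": "/internal/batch", "severity": "critical",
                       "requires_unauth": True}))  # -> info
```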
Measure and Act on False Positives
Track how many findings your team marks as unreachable or unexploitable. If that number exceeds 50%, your tooling needs recalibration. The fintech team's 99.6% noise rate indicates a fundamental tooling problem, not a triage problem.
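The arithmetic is simple enough to wire into a dashboard; using the numbers from this case:

```python
def noise_rate(total: int, exploitable: int) -> float:
    """Share of findings that consumed triage time without reducing risk."""
    return (total - exploitable) / total

rate = noise_rate(847, 3)
print(f"noise rate: {rate:.1%}")  # -> 99.6%
if rate > 0.5:
    print("recalibrate tooling: add reachability and context filters")
```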
Integrate AI-Driven Analysis
Agentic AI models that map out how vulnerabilities could realistically be exploited in your environment can reduce manual analysis time. These systems reason about code behavior, data flow, and architectural constraints.
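If you go this route, keep the model grounded in facts your pipeline already produces, and keep a human in the loop. A rough sketch, where `llm` is a placeholder for whatever client your provider's SDK exposes:

```python
# Hedged sketch: feed the facts your pipeline already has (data flow,
# gateway exposure, auth) to a model and ask for an exploitability
# judgment. `llm` and its complete() method are hypothetical.

def assess_with_model(llm, finding: dict, context: dict) -> str:
    prompt = (
        "Given this static-analysis finding and deployment context, "
        "explain whether a realistic exploit path exists.\n"
        f"Finding: {finding}\nContext: {context}\n"
        "Answer 'exploitable' or 'not exploitable' with one-line reasoning."
    )
    return llm.complete(prompt)  # hypothetical call; adapt to your SDK

# Use the model's answer to rank findings for human review,
# never to auto-close them.
```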
Rebuild Developer Trust
After an incident like this, development teams stop taking security findings seriously. Run a calibration exercise: have security and development jointly review 20 findings together, discussing why each does or doesn't represent real risk. Use this to tune your tooling and rebuild collaborative relationships.
Define "Exploitable" in Your Environment
Create explicit criteria: a finding is exploitable if (1) attacker-controlled input can reach the vulnerable code, (2) no intermediate controls prevent exploitation, and (3) successful exploitation impacts confidentiality, integrity, or availability of production data. Document these criteria and apply them consistently.
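Those three criteria translate directly into a triage predicate; the field names below are hypothetical and should map to whatever your scanner and context-gathering tooling emit:

```python
# The three criteria from above, encoded as an explicit predicate.

def is_exploitable(finding: dict) -> bool:
    return (
        finding["input_reaches_sink"]          # (1) attacker input reaches the code
        and not finding["blocked_by_control"]  # (2) no intermediate control prevents it
        and finding["impacts_production_cia"]  # (3) C/I/A impact on production data
    )

assert is_exploitable({"input_reaches_sink": True,
                       "blocked_by_control": False,
                       "impacts_production_cia": True})
```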
The fintech team eventually implemented contextual reachability analysis and reduced their false positive rate to 12%. More importantly, they caught two critical vulnerabilities in the next release that their old tooling would have buried under noise. Your security program should identify real threats, not generate work that doesn't reduce risk.