Your security scanners flag hundreds of findings per pull request. Your developers ignore most of them. Meanwhile, AI code assistants generate more code than your team can manually review. You need a policy that focuses your team on vulnerabilities that actually matter.
This template implements a reachability-first security policy—a systematic approach that prioritizes exploitable vulnerabilities over theoretical risks. Instead of treating every finding equally, your team will focus on code paths that attackers can actually reach from entry points.
Purpose of the Template
This policy template establishes the rules for how your team triages, prioritizes, and remediates security findings in CI/CD pipelines. It shifts your workflow from "fix everything the scanner reports" to "fix what attackers can exploit first."
The template addresses three problems:
- Alert fatigue: Security tools flood every pull request with findings, many of them irrelevant to your actual risk.
- AI-generated code volume: AI tooling increases code output without matching oversight, creating blind spots.
- Developer friction: Blocking builds for unexploitable findings damages trust between security and engineering teams.
Use this policy when you integrate static analysis, dependency scanning, or any tool that produces high volumes of findings without built-in context about exploitability.
Prerequisites
Before implementing this policy, you need:
- A code property graph (CPG) analyzer or equivalent reachability tool—options include commercial tools that perform dataflow analysis and open-source solutions that can trace execution paths from entry points to sinks.
- Defined application entry points—HTTP endpoints, CLI commands, message queue consumers, scheduled jobs.
- CI/CD pipeline integration points—Where you'll enforce policy decisions (PR checks, merge gates, production deployment gates).
- Baseline vulnerability inventory—Run your current scanners to understand your starting point.
You don't need perfect coverage to start. Begin with your most critical applications and expand.
The Policy Template
# Reachability-First Security Remediation Policy
Version: 1.0
Effective Date: [DATE]
Review Cycle: Quarterly
## 1. Scope
This policy applies to all application code repositories that:
- Deploy to production environments
- Process customer data or authentication
- Integrate with third-party services via API
## 2. Finding Classification
### 2.1 Tier 1: Reachable and Exploitable
Vulnerabilities where:
- A code path exists from an application entry point to the vulnerable code
- The vulnerability can be triggered by external input
- Exploitation would violate confidentiality, integrity, or availability
**SLA**: Fix within 5 business days
**Build Policy**: Block merge to main branch
**Exception Process**: Requires VP Engineering approval
### 2.2 Tier 2: Reachable but Mitigated
Vulnerabilities where:
- A code path exists from entry point to vulnerable code
- BUT compensating controls exist (WAF rules, input validation, authentication)
- OR exploitation requires multiple chained conditions
**SLA**: Fix within 15 business days
**Build Policy**: Warning only, does not block merge
**Exception Process**: Security team lead approval
### 2.3 Tier 3: Unreachable
Vulnerabilities where:
- No code path exists from any entry point to the vulnerable code
- Dead code, test-only code, or development dependencies
- Code behind feature flags set to permanently disabled
**SLA**: Fix within 30 business days or accept risk
**Build Policy**: Informational only
**Exception Process**: Team lead acknowledgment required
### 2.4 Tier 4: False Positives
Findings that:
- Cannot be reproduced in actual execution
- Result from scanner misunderstanding of framework behavior
- Are marked as false positive by reachability analysis
**SLA**: Document reason, no fix required
**Build Policy**: Suppress in scanner configuration
**Exception Process**: None
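The four tiers above reduce to a small triage function. The sketch below is one possible encoding, assuming your reachability tool reports a boolean reachable flag and your scanner metadata lists compensating controls; the `Finding` shape and field names are hypothetical, so adapt them to your tooling's actual output:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    # Hypothetical shape; rename fields to match your scanner's output.
    id: str
    reachable: bool                # reported by reachability analysis
    false_positive: bool = False
    mitigations: list = field(default_factory=list)  # e.g. ["waf", "input-validation"]
    chained_conditions: int = 1    # conditions an attacker must chain to exploit

def classify(finding: Finding) -> int:
    """Map a finding to a remediation tier (1-4) per Section 2."""
    if finding.false_positive:
        return 4                   # Tier 4: document and suppress
    if not finding.reachable:
        return 3                   # Tier 3: no path from any entry point
    if finding.mitigations or finding.chained_conditions > 1:
        return 2                   # Tier 2: reachable but mitigated
    return 1                       # Tier 1: reachable and exploitable

# A reachable SQL injection with no compensating controls is Tier 1.
print(classify(Finding(id="CVE-2024-0001", reachable=True)))  # prints 1
```

Keeping the classification in one pure function makes it easy to unit-test the policy itself and to audit why any given finding landed in its tier.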
## 3. Reachability Analysis Requirements
### 3.1 Entry Point Definition
Each application must maintain an entry point inventory including:
- HTTP routes and their methods
- Scheduled job triggers
- Message queue subscriptions
- CLI commands
- gRPC/GraphQL endpoints
Update this inventory when adding new entry points.
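One lightweight way to maintain this inventory is as structured data checked into the repository, so PR diffs show new entry points explicitly. A sketch, with illustrative categories mirroring the list above (all names here are examples, not a required schema):

```python
# entry_points.py -- illustrative inventory; categories mirror Section 3.1.
ENTRY_POINTS = {
    "http_routes": [
        {"method": "POST", "path": "/api/login"},
        {"method": "GET", "path": "/api/orders/<id>"},
    ],
    "scheduled_jobs": ["nightly-report"],
    "queue_subscriptions": ["orders.created"],
    "cli_commands": ["manage.py import-users"],
    "grpc_endpoints": ["OrderService.CreateOrder"],
}

def all_entry_points():
    """Flatten the inventory into (category, item) pairs for tooling."""
    for category, items in ENTRY_POINTS.items():
        for item in items:
            yield category, item

for category, item in all_entry_points():
    print(category, item)
```

Because the inventory is code, a CI step can diff it between branches to enforce the "update on change" rule.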
### 3.2 Analysis Execution
Run reachability analysis:
- On every pull request before merge
- Nightly on main branch
- Before production deployments
### 3.3 Analysis Scope
Reachability analysis must trace:
- Direct function calls
- Framework routing (HTTP handlers, middleware)
- Dependency calls (libraries, internal packages)
- Data flow from entry points to security-sensitive sinks
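Conceptually, this tracing is a graph search from entry points toward security-sensitive sinks. The toy sketch below shows the idea with a hand-built call graph; real tools derive the graph from the CPG, including framework routing and dependency edges, and track dataflow rather than mere call edges:

```python
from collections import deque

# Toy call graph: caller -> callees. Real tools build this from the CPG.
CALL_GRAPH = {
    "POST /api/login": ["validate_input", "find_user"],
    "find_user": ["build_query"],
    "build_query": ["db.execute"],     # security-sensitive sink
    "unused_helper": ["db.execute"],   # dead code: no entry point calls it
}

def reachable_sinks(entry_point, sinks):
    """Return the sinks reachable from an entry point via breadth-first search."""
    seen, found = set(), set()
    queue = deque([entry_point])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if node in sinks:
            found.add(node)
        queue.extend(CALL_GRAPH.get(node, []))
    return found

print(reachable_sinks("POST /api/login", {"db.execute"}))  # {'db.execute'}
```

A vulnerability in `unused_helper` would be Tier 3 here: the sink is in its body, but no entry point ever reaches that function.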
## 4. Developer Workflow Integration
### 4.1 Pull Request Checks
PR must pass:
1. Standard security scanner (SAST/SCA)
2. Reachability analysis
3. Tier 1 finding check (must be zero)
Developers receive:
- Finding count by tier
- Code path visualization for Tier 1 findings
- Remediation guidance specific to the vulnerability type
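The Tier 1 gate in check 3 can be a short pipeline script that consumes classified findings and sets the exit code. A minimal sketch, assuming each finding arrives as a dict with a `tier` field (in CI you would load these from your tool's JSON output and call `sys.exit(gate(findings))`):

```python
def gate(findings):
    """Return 1 (block merge) if any Tier 1 findings exist; warn on Tier 2."""
    by_tier = {}
    for f in findings:
        by_tier.setdefault(f["tier"], []).append(f)
    for tier in sorted(by_tier):
        print(f"Tier {tier}: {len(by_tier[tier])} finding(s)")
    if by_tier.get(1):
        print("BLOCKED: Tier 1 findings must be zero before merge.")
        return 1
    if by_tier.get(2):
        print("WARNING: Tier 2 findings present; SLA is 15 business days.")
    return 0

# One reachable, unmitigated finding blocks the merge.
print(gate([{"id": "F-1", "tier": 1}]))  # prints the count, the block message, then 1
```

Keeping the gate as its own step (after the scanner and reachability steps) lets you flip it from warn-only to blocking without touching the analysis jobs.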
### 4.2 Finding Ownership
- Tier 1: Security team and code author jointly responsible
- Tier 2-3: Code author responsible, security team advisory
- Tier 4: Security team documents suppression reason
## 5. Metrics and Reporting
Track monthly:
- Total findings by tier
- Mean time to remediation by tier
- False positive rate
- Developer time spent on security findings
- Tier 1 findings that reached production
Report quarterly to engineering leadership.
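Mean time to remediation per tier is straightforward to compute from each finding's open and close dates. A sketch, with assumed field names and open findings excluded from the average:

```python
from datetime import date

def mttr_by_tier(findings):
    """Mean time to remediation (in days) per tier, over closed findings only."""
    totals = {}
    for f in findings:
        if f.get("closed") is None:
            continue  # still open; excluded from MTTR
        days = (f["closed"] - f["opened"]).days
        total, count = totals.get(f["tier"], (0, 0))
        totals[f["tier"]] = (total + days, count + 1)
    return {tier: total / count for tier, (total, count) in totals.items()}

sample = [
    {"tier": 1, "opened": date(2024, 3, 1), "closed": date(2024, 3, 4)},
    {"tier": 1, "opened": date(2024, 3, 2), "closed": date(2024, 3, 7)},
    {"tier": 3, "opened": date(2024, 3, 1), "closed": None},
]
print(mttr_by_tier(sample))  # {1: 4.0}
```

Comparing this number against the SLA per tier gives leadership a direct compliance signal each quarter.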
## 6. AI-Generated Code Provisions
For code generated by AI assistants:
- Apply same reachability analysis as human-written code
- Flag for manual review any AI-generated code that introduces new entry points
- Require human review of any AI-generated code handling authentication or authorization
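The new-entry-point flag pairs naturally with a checked-in inventory: diff it between the base branch and the PR head. A sketch, assuming the inventory can be flattened to a set of route strings:

```python
def new_entry_points(base_inventory, head_inventory):
    """Return entry points present in the PR head but not in the base branch."""
    return sorted(set(head_inventory) - set(base_inventory))

base = {"POST /api/login", "GET /api/orders"}
head = {"POST /api/login", "GET /api/orders", "POST /api/admin/export"}

added = new_entry_points(base, head)
if added:
    # In CI, attach this list to the PR and require a human reviewer.
    print("Manual review required; new entry points:", added)
```

Because the check only compares inventories, it applies identically to human-written and AI-generated code, which keeps the provision enforceable rather than honor-system.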
## 7. Exceptions and Risk Acceptance
Risk acceptance for Tier 1-2 findings requires:
- Written justification of business need
- Documented compensating controls
- Approval per the exception process in Section 2
- 90-day review cycle
## 8. Policy Review
Review this policy:
- After any security incident involving an exploited vulnerability
- When introducing new languages or frameworks
- Quarterly with engineering and security leadership
How to Customize It
Adjust SLAs based on your release cadence: If you deploy multiple times per day, 5 business days for Tier 1 may be too slow. Consider 24-48 hours. If you release monthly, extend timelines proportionally.
Define entry points for your architecture: The template assumes HTTP-based services. If you build CLI tools, batch processors, or embedded systems, redefine entry points accordingly. For example, a batch processor's entry points might be S3 bucket notifications or scheduled triggers.
Set build policies to match your risk tolerance: The template blocks merges for Tier 1 findings. If your team isn't ready for hard blocks, start with warnings and weekly reports to leadership. Escalate to blocks after your team adjusts to reachability-based triage.
Adapt AI code provisions to your tooling: If your team uses GitHub Copilot, Cursor, or other AI assistants, add specific rules about what types of code require human review. Consider requiring review for any AI-generated code that touches your entry point inventory.
Integrate with your existing tools: The policy references "reachability analysis" generically. Replace this with your specific tool—whether that's a commercial CPG analyzer or a custom dataflow analysis script.
Validation Steps
After implementing this policy:
Run a baseline scan: Execute your security scanners and reachability analysis on your main branch. Classify all findings according to the tier system. This is your starting point.
Test the PR workflow: Create a test PR that introduces a Tier 1 vulnerability (e.g., SQL injection in a route handler). Verify that your pipeline blocks the merge and provides clear remediation guidance.
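For the test PR, a deliberately vulnerable handler like the one below gives the pipeline a Tier 1 finding to catch, alongside the parameterized fix your remediation guidance should point to. This is an illustrative sketch using sqlite3 for self-containment; the functions are hypothetical and must never ship:

```python
import sqlite3

def get_user_vulnerable(conn, username):
    # SQL injection: user input is concatenated into the query string.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_fixed(conn, username):
    # Parameterized query: the driver escapes the input.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic payload returns every row through the vulnerable path.
print(get_user_vulnerable(conn, "' OR '1'='1"))  # [(1,)]
print(get_user_fixed(conn, "' OR '1'='1"))       # []
```

If your pipeline lets `get_user_vulnerable` merge when it sits in a route handler, either the scanner rule, the reachability trace, or the gate script needs attention.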
Validate tier classification: Manually review a sample of Tier 3 findings to confirm they're truly unreachable. If you find reachable code classified as Tier 3, your entry point inventory is incomplete.
Measure developer experience: Survey your engineering team after 30 days. Ask: "Do security findings now feel more relevant?" and "How much time do you spend on security findings per week?" Compare to baseline.
Track Tier 1 escapes: If a Tier 1 finding reaches production, conduct a post-incident review. Did the finding exist before the policy? Was it misclassified? Did someone approve an exception? Use this data to refine your process.
This policy won't eliminate all security findings. It will focus your team's limited time on vulnerabilities that actually threaten your application—the ones attackers can reach and exploit. Start with your highest-risk applications, refine based on what you learn, and expand from there.