Category: Vulnerability Management

Coordinated Vulnerability Disclosure

Also known as: CVD, Responsible Disclosure, Coordinated Disclosure
Simply put

Coordinated Vulnerability Disclosure (CVD) is a process in which a security vulnerability is reported privately to the affected vendor or responsible party before any public disclosure is made. This coordination gives the vendor time to investigate and develop a fix, reducing the window during which attackers could exploit the vulnerability. The vulnerability is then disclosed publicly after mitigations are available or a reasonable remediation period has elapsed.

Formal definition

CVD is a structured process for gathering information from vulnerability finders, coordinating the sharing of that information among relevant stakeholders (including vendors, affected parties, and coordinating bodies), and managing the timing of public disclosure. The core objective is to reduce adversary advantage during the period between vulnerability discovery and mitigation availability. The process typically involves a disclosure timeline negotiated between the finder and the vendor, during which the vendor develops and releases a patch or mitigation before details become publicly available. CVD may involve a third-party coordinator (such as a CERT or CSIRT) when direct communication between finder and vendor is impractical or when multiple vendors are affected. It is distinct from full immediate disclosure and from non-disclosure, representing a middle-ground model that attempts to balance transparency with user protection.

Why it matters

When a security vulnerability is discovered, the window between discovery and remediation represents a period of heightened risk for users and organizations. If vulnerability details are disclosed publicly before a patch is available, attackers can rapidly develop exploits and target unprotected systems at scale. CVD addresses this risk by giving vendors a structured opportunity to investigate, develop, and distribute mitigations before technical details become widely known, reducing adversary advantage during that critical period.

Who it's relevant to

Security Researchers and Vulnerability Finders
Independent researchers and penetration testers who discover vulnerabilities in software or systems need to understand CVD processes to report findings responsibly. Following a CVD process helps researchers avoid legal risk, maintain professional credibility, and ensure their work results in actual user protection rather than enabling exploitation.
Software Vendors and Product Teams
Vendors who receive vulnerability reports need clearly defined CVD policies and internal processes to triage, investigate, and remediate findings within agreed timelines. Establishing a published vulnerability disclosure policy and a dedicated intake channel (such as a security contact or bug bounty program) is a prerequisite for participating effectively in CVD.
CERT and CSIRT Coordinators
Coordinating bodies such as national CERTs and CSIRTs play a central role in CVD when direct communication between finders and vendors is impractical, or when a vulnerability affects multiple vendors simultaneously. These organizations facilitate communication, help negotiate timelines, and may manage multi-party disclosure processes to ensure all affected parties can prepare mitigations before public disclosure.
Application Security and DevSecOps Teams
Internal security teams at organizations that produce software are responsible for operationalizing CVD by maintaining processes to receive, validate, and act on external vulnerability reports. They must coordinate with development, legal, and communications stakeholders to manage remediation timelines and ensure that patches are shipped before or alongside any public disclosure.
Policy and Compliance Professionals
Regulatory bodies, including EU agencies such as ENISA, increasingly treat CVD as a critical mechanism for protecting users. Legal and compliance teams need to understand CVD frameworks to ensure their organizations meet any applicable requirements around vulnerability handling and to draft disclosure policies that provide clear guidance to external reporters.

Inside CVD

Vulnerability Reporter
The individual or organization (typically a security researcher, penetration tester, or bug bounty participant) who discovers and reports the vulnerability to the affected vendor or coordinator.
Affected Vendor or Maintainer
The party responsible for the software, system, or component containing the vulnerability, who receives the report and is expected to develop and release a remediation within an agreed timeframe.
Disclosure Deadline
A defined timeframe, commonly 45 to 90 days, within which the vendor is expected to produce a fix before the reporter publicly releases details of the vulnerability. The deadline creates accountability and prevents indefinite suppression of vulnerability information.
Coordinating Body
An optional neutral third party, such as CERT/CC or a national CSIRT, that facilitates communication between the reporter and vendor, particularly when direct contact is difficult or when multiple vendors are affected.
Remediation Period
The time between initial private notification and public disclosure, during which the vendor develops, tests, and deploys a fix. The length may be adjusted by mutual agreement based on the complexity of the vulnerability.
Public Disclosure
The release of vulnerability details to the broader community, typically including a CVE identifier, severity scoring, technical description, and available mitigations or patches, timed to coincide with or follow the availability of a fix.
CVE Assignment
The process of obtaining a Common Vulnerabilities and Exposures identifier for the reported vulnerability, enabling consistent tracking and reference across advisories, patch notes, and security tooling.
Security Advisory
A formal document published by the vendor or coordinator at the time of public disclosure, communicating the nature of the vulnerability, affected versions, severity, and remediation steps to affected users.

Common questions

Answers to the questions practitioners most commonly ask about CVD.

Is coordinated vulnerability disclosure the same as responsible disclosure?
These terms are often used interchangeably, but they are not identical. Responsible disclosure is an older, informal term that typically implied a researcher-driven process with no formal obligations on either party. Coordinated vulnerability disclosure is a more structured framework, typically defined by standards such as ISO/IEC 29147, that establishes roles, timelines, and expectations for both the reporting party and the vendor or coordinator. CVD emphasizes mutual coordination and defined processes rather than relying on good faith alone.
Does coordinated vulnerability disclosure require the researcher to give the vendor unlimited time to fix the issue before going public?
No. CVD does not require indefinite confidentiality. Most CVD frameworks define a disclosure deadline, commonly 90 days, after which the researcher may publish details regardless of whether a patch is available. This deadline structure is intentional: it creates accountability for vendors to respond and remediate within a reasonable timeframe. Extensions may be negotiated, but the researcher is not obligated to withhold information indefinitely if the vendor is unresponsive or remediation is delayed without justification.
How should an organization set up a channel for receiving vulnerability reports under a CVD process?
Organizations should establish a dedicated, publicized intake mechanism, typically a security contact email such as security@[domain], a web form, or a bug bounty platform. The contact information should be published in a security.txt file served under the site's /.well-known/ path (per RFC 9116), in DNS records, and in any relevant documentation. The intake channel should acknowledge receipt promptly, ideally within a defined SLA such as five business days, and should support encrypted communication (for example, a published PGP key) for sensitive submissions.
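A minimal security.txt file, following the field names defined in RFC 9116, might look like the following. All addresses and URLs here are placeholders, not real endpoints:

```text
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/security-policy
Preferred-Languages: en
```

The Contact and Expires fields are required by the RFC; the others point reporters to an encryption key and the published disclosure policy.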
What timelines should an organization define in its CVD policy?
A CVD policy should define at minimum: the maximum time to acknowledge receipt of a report, the maximum time to validate and triage the report, the target remediation timeframe, and the expected disclosure date. Common benchmarks include acknowledgment within 7 days, triage within 14 days, and remediation or mitigation within 90 days. The policy should also specify how the organization handles requests for timeline extensions and what happens if remediation cannot be completed before the disclosure deadline.
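As an illustration of how such a policy can be operationalized, the sketch below computes target dates for each milestone from a report's intake date. The 7/14/90-day offsets mirror the benchmarks mentioned above; they are illustrative defaults, not a normative standard, and the function name is hypothetical:

```python
from datetime import date, timedelta

# Illustrative milestone offsets in days, mirroring common CVD benchmarks
# (acknowledge within 7, triage within 14, remediate within 90).
MILESTONES = {
    "acknowledge_by": 7,
    "triage_by": 14,
    "remediate_by": 90,
}

def cvd_schedule(report_date: date) -> dict:
    """Compute target calendar dates for each CVD milestone."""
    return {name: report_date + timedelta(days=offset)
            for name, offset in MILESTONES.items()}

# Example: a report received on 1 March 2024.
schedule = cvd_schedule(date(2024, 3, 1))
print(schedule["remediate_by"])  # 2024-05-30
```

A real tracker would also record negotiated extensions and the agreed public disclosure date alongside these internal targets.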
How should an organization coordinate with third parties such as upstream vendors or downstream customers during a CVD process?
When a vulnerability affects components from an upstream vendor, the organization should notify that vendor as early as possible and align disclosure timelines. Coordination bodies such as CERT/CC or national CSIRTs can facilitate multi-party disclosure when multiple vendors are affected. For downstream customers or users, the organization should plan a notification strategy that gives them enough time to apply patches or mitigations before full public disclosure. The CVD policy should document escalation paths and coordination contacts for these scenarios.
What should an organization include in a public disclosure advisory to satisfy CVD obligations?
A public disclosure advisory should typically include a unique identifier such as a CVE number, a description of the vulnerability and its potential impact, affected versions or configurations, the remediation or mitigation available, credit to the reporting researcher if they consent, and a timeline summary of the disclosure process. The advisory should be published in a consistent, findable location such as a security advisories page. Organizations should avoid disclosing exploitation details that could meaningfully assist attackers before patches are widely deployed.
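The elements above can be sketched as a simple advisory skeleton. The field names and values below are illustrative placeholders, not a formal schema such as CSAF:

```text
# Illustrative advisory outline (all values are placeholders)
id: CVE-YYYY-NNNNN
title: <one-line vulnerability summary>
severity: <CVSS score and vector>
affected:
  - product: <product name>
    versions: <affected version range>
remediation: <upgrade instructions or mitigation steps>
credit: <reporting researcher, with their consent>
timeline:
  - <date>: Report received
  - <date>: Vulnerability confirmed
  - <date>: Fix released; advisory published
```

Publishing every advisory in this consistent shape, at a stable URL, makes it easier for downstream users and tooling to consume.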

Common misconceptions

Coordinated Vulnerability Disclosure requires the reporter to keep the vulnerability secret indefinitely if the vendor requests more time.
CVD involves a defined disclosure deadline that creates a bounded obligation on the reporter. If the vendor fails to remediate within the agreed timeframe, the reporter typically retains the right to proceed with public disclosure, preventing indefinite suppression of vulnerability information.
CVD and bug bounty programs are the same thing.
Bug bounty programs are a specific, often commercially structured mechanism for soliciting vulnerability reports, usually involving monetary rewards. CVD is a broader process framework for handling vulnerability reports responsibly and may occur entirely outside any bounty program context.
Publicly disclosing a vulnerability before a patch is available always constitutes irresponsible or unethical behavior by the reporter.
When a vendor is unresponsive, denies the issue, or fails to remediate within a reasonable and communicated deadline, public disclosure may be appropriate and consistent with established CVD norms. Some CVD frameworks explicitly permit disclosure under these conditions to protect the broader user community.

Best practices

Establish and publish a clear vulnerability disclosure policy before receiving reports, specifying preferred contact channels, expected response timelines, and the scope of systems covered, so reporters know how to engage and what to expect.
Acknowledge receipt of a vulnerability report promptly, typically within 5 business days, and provide the reporter with a tracking reference and an initial assessment timeline to maintain trust and reduce the likelihood of premature public disclosure.
Agree on a disclosure deadline with the reporter at the outset of the process, and communicate proactively if remediation requires timeline adjustments, rather than allowing deadlines to lapse without notice.
Request a CVE identifier early in the remediation process so that downstream users, package managers, and security tooling can reference the vulnerability consistently when the advisory is published.
Coordinate the timing of public disclosure so that the security advisory and remediation are available simultaneously, giving affected users actionable guidance at the moment vulnerability details become public.
Maintain a documented internal process for triaging, escalating, and tracking reported vulnerabilities, including defined ownership roles, so that reports do not stall due to unclear internal responsibilities.