Overview of the Vulnerability
Noma Security has disclosed a vulnerability in Grafana's AI-powered dashboards, allowing attackers to exfiltrate sensitive data through indirect prompt injection. This flaw, named GrafanaGhost, exploits a weakness in URL validation by using protocol-relative URLs to bypass security controls. Although Grafana Labs has implemented a fix, the incident has sparked debate about its severity. Noma Security demonstrated data exfiltration without user authentication, while Grafana Labs' CISO Joe McManus argues that significant user interaction is required for exploitation.
The core issue lies in Grafana's AI features, which process dashboard content that can contain attacker-controlled prompts, manipulating the LLM into making unauthorized external requests.
Timeline of Events
The sequence of events follows standard responsible disclosure:
- Noma Security identified the vulnerability during research into AI-enabled enterprise platforms.
- The team disclosed the flaw to Grafana Labs.
- Grafana Labs developed and deployed a fix.
- Public disclosure occurred after remediation.
- Debate emerged between researchers and the vendor about practical exploitability.
Failed or Missing Controls
Input Validation on AI-Processed Content
Grafana's AI dashboard feature processed user-supplied content without adequately filtering embedded prompts. This created an indirect prompt injection vector.
URL Validation Bypass
The vulnerability exploited Grafana's URL validation using protocol-relative URLs (starting with //). The validation logic failed to recognize these as external URLs, but the underlying HTTP client did.
Lack of Egress Controls
The AI system could initiate outbound connections to attacker-controlled domains based on prompt instructions. No allowlist or network segmentation prevented the LLM from making arbitrary external requests during dashboard rendering.
Missing Authentication Requirements
Noma Security's findings suggest the attack could exfiltrate data without user authentication, indicating unauthenticated endpoints or session handling issues in the AI pipeline.
Relevant Standards
OWASP ASVS v4.0.3
Requirement 5.2.1 mandates that applications sanitize, disable, or sandbox user-supplied content. This principle applies to LLM prompts—treat all user-supplied content as potentially malicious, even when processed by AI.
PCI DSS v4.0.1 Requirement 6.4.3
If your Grafana dashboards display payment card data, ensure script execution is explicitly authorized. This control extends to any feature executing code or making network requests based on user input.
NIST 800-53 Rev 5 SI-10
SI-10 requires organizations to validate information inputs. For AI systems, validate both direct user input and the data your LLM processes.
ISO 27001 Annex A.8.22
Control 8.22 addresses network segregation and filtering. AI services should operate in a network segment with strict egress filtering.
Action Items for Your Team
Map Your AI Attack Surface
Identify every feature where an LLM processes user-supplied or influenced content. Document data sources, actions the LLM can trigger, and network requests the LLM can initiate.
Implement Prompt Injection Defenses
Add a validation layer before LLM processing. Strip or escape prompt control characters from user content. Consider using a separate, restricted LLM instance for processing untrusted data.
Fix Protocol-Relative URL Parsing
Review your URL validation logic. Treat protocol-relative URLs as external references requiring the same validation as absolute URLs.
Segment AI Service Network Access
Implement egress filtering for your LLM services. Create an allowlist of legitimate external APIs required by your AI features. Block all other outbound connections.
Test AI Features with Adversarial Prompts
Include prompt injection scenarios in your security testing. Test cases should cover embedded instructions for external requests, attempts to override system prompts, data exfiltration through error messages, and protocol-relative URLs.
The GrafanaGhost vulnerability highlights the emerging security challenges posed by AI integration in enterprise software, especially in data visualization platforms. Your team must treat every AI feature as a new execution context, requiring robust validation and security controls.



