
AI Writes Your Code. Who Manages Its Secrets?

Your development team just adopted an AI coding assistant. Within weeks, your commit velocity doubled. Your security team celebrated the reduced vulnerability count in static scans. Then you discovered 47 API keys in your Git history—31 of them still active.

This checklist addresses the security gap that AI-driven development creates: the explosion of secrets, credentials, and non-human identities that AI systems generate, consume, and inadvertently expose. While tools like Anthropic's Claude Code Security focus on scanning AI-generated code for vulnerabilities, the real compliance risk lies in managing the authentication credentials that both enable and threaten your AI-assisted development pipeline.

What This Checklist Covers

You'll verify that your secrets management program can handle the scale and velocity of AI-driven development. This includes credential lifecycle management, detection of exposed secrets, and controls around non-human identities. Each item maps to specific compliance requirements where applicable.

Prerequisites

Before starting this checklist:

  • Inventory your AI development tools: Document which systems generate code, access repositories, or interact with production systems.
  • Identify your secrets detection tooling: Know whether you're scanning commits, pull requests, or just production deployments.
  • Map your non-human identity landscape: List service accounts, API keys, and machine credentials that AI systems use.

Checklist Items

1. Secrets Scanning Coverage

Done when: You scan 100% of commits before they reach your main branch, with automated blocking of commits containing high-confidence secrets.

Your CI/CD pipeline must intercept secrets before they enter version control. Configure pre-commit hooks or CI checks that scan at the speed your team commits. If AI-generated commits grew by a factor of 10 over the past year in the broader ecosystem, assume your team's velocity will follow a similar trajectory.

What good looks like: A developer commits code with an accidentally embedded AWS key. The commit is rejected within 5 seconds with a specific error message identifying the secret type and line number. The key is immediately flagged for rotation.
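A blocking pre-commit check can be sketched as a small pattern matcher. This is a minimal illustration, not a production detector: real scanners (e.g. GitGuardian's ggshield) ship hundreds of detectors plus validity checks, and the patterns below are an assumed illustrative subset.

```python
import re

# Illustrative high-confidence patterns; a real scanner has far more.
PATTERNS = {
    "AWS Access Key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub Personal Access Token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str, filename: str = "<staged>") -> list[str]:
    """Return findings as 'file:line: secret type' strings.

    A pre-commit hook would run this over the staged diff and exit
    non-zero on any finding, rejecting the commit with the exact
    secret type and line number.
    """
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{filename}:{lineno}: {name}")
    return findings
```

Returning the secret type and line number is what makes the rejection actionable: the developer fixes the exact line instead of hunting through the diff.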

Compliance note: PCI DSS v4.0.1 Requirement 6.3.1 requires that security vulnerabilities be identified and managed. Hardcoded secrets qualify as vulnerabilities.

2. Historical Repository Scanning

Done when: You've scanned the entire Git history of every repository (including branches and forks) and remediated all discovered secrets.

AI tools learn from your existing codebase. If secrets exist in your history, AI assistants may pattern-match and reproduce similar structures. Scan your full Git history, not just recent commits.

What good looks like: You run a full-history scan across 200 repositories. You find 83 secrets spanning three years. Each is categorized by type, risk level, and current validity. All active secrets are rotated within 24 hours. Dead secrets are documented but left in history (since rewriting history creates its own risks).
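One way to sweep history is to parse the patch output of `git log --all -p --unified=0` (which covers every branch) and attribute findings to the commits whose added lines introduced them. This sketch uses a single assumed AWS-key pattern for brevity:

```python
import re

AWS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_secrets_in_history(log_output: str) -> dict[str, list[str]]:
    """Map commit SHA -> secrets found in that commit's added lines.

    Expects the text produced by `git log --all -p --unified=0`.
    Only '+' lines are checked, so each secret is attributed to the
    commit that introduced it rather than every commit touching the file.
    """
    findings: dict[str, list[str]] = {}
    current_sha = None
    for line in log_output.splitlines():
        if line.startswith("commit "):
            current_sha = line.split()[1]
        elif line.startswith("+") and not line.startswith("+++"):
            for match in AWS_KEY.finditer(line):
                findings.setdefault(current_sha, []).append(match.group())
    return findings
```

Dedicated history scanners are faster and catch far more secret types, but the attribution logic (scan added lines, key by commit) is the same.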

Compliance note: SOC 2 Type II CC6.1 requires logical access controls. Exposed credentials undermine those controls regardless of when they were committed.

3. Non-Human Identity Inventory

Done when: You maintain a current inventory of all service accounts, API keys, OAuth tokens, and machine credentials, including which systems created them and their last use timestamp.

AI systems operate through non-human identities. Unlike human accounts, these credentials don't retire when someone leaves. Track every machine credential with the same rigor you apply to employee accounts.

What good looks like: Your inventory shows 347 non-human identities. Each entry includes: creation date, creating system, purpose, last authentication, scope/permissions, and rotation schedule. You can answer "which credentials does our AI coding assistant use?" in under 60 seconds.

Compliance note: NIST Cybersecurity Framework v2.0 subcategory PR.AA-01 requires that identities and credentials for authorized users, services, and hardware be managed. This explicitly includes non-human identities.

4. Credential Rotation Policy

Done when: Every non-human identity has a documented maximum lifetime, automated rotation where possible, and an exception process for credentials that cannot be automatically rotated.

Set maximum lifetimes based on risk: 90 days for production access, 30 days for credentials with write permissions, 7 days for temporary development keys. AI systems should never use long-lived credentials when short-lived tokens are available.

What good looks like: Your CI/CD system generates ephemeral credentials for each build that expire after 2 hours. Your AI coding assistant uses OAuth tokens that refresh every 24 hours. The 12 credentials that require manual rotation (legacy systems) are tracked in a ticketing system with automated reminders 7 days before expiration.

5. Secrets Detection Performance

Done when: Your scanning infrastructure can process commits at or above the rate your team generates them, with latency under 10 seconds per scan.

If your scanning can't keep pace with AI-assisted development velocity, you'll create a backlog. GitGuardian can scan at 50 MB/s per core—benchmark your tools against this capability. Slow scans train developers to bypass security checks.

What good looks like: Your team averages 400 commits per day. Your scanning infrastructure processes each commit in 3-7 seconds with no queue buildup during peak hours. Your 95th percentile scan time is under 10 seconds.
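Verifying the 95th-percentile target is a one-liner over your scan-time samples. This sketch uses the nearest-rank method; the sample values are illustrative, not measurements:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Hypothetical per-commit scan times in seconds for one day.
scan_times = [3.1, 3.9, 4.0, 4.1, 4.4, 5.2, 5.5, 6.8, 7.0, 9.6]
p95 = percentile(scan_times, 95)
meets_target = p95 < 10  # checklist threshold: p95 under 10 seconds
```

Track this per day, not per month: a queue that builds only during peak commit hours will hide inside a monthly average.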

6. AI System Credential Segregation

Done when: Credentials used by AI development tools are segregated from production credentials, with separate rotation schedules and monitoring.

Your AI coding assistant should never authenticate with the same credentials your production systems use. Create dedicated service accounts with minimal necessary permissions.

What good looks like: Your AI tools authenticate using dedicated service accounts tagged "ai-development" in your identity provider. These accounts cannot access production databases or deployment pipelines. Monitoring alerts fire if these accounts attempt to access production resources.
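Segregation is enforceable as a policy check: any account tagged for AI development must hold no production scopes. The tag and scope names below are assumptions matching the example above:

```python
# Scopes that must never appear on AI-development accounts (illustrative).
PRODUCTION_SCOPES = {"prod:db:read", "prod:db:write", "deploy:pipeline"}

def violations(accounts: list[dict]) -> list[str]:
    """Accounts tagged 'ai-development' must hold no production scopes.

    Each account is a dict with 'name', 'tags', and 'scopes' keys,
    e.g. as exported from an identity provider.
    """
    bad = []
    for acct in accounts:
        if "ai-development" in acct.get("tags", []):
            overlap = set(acct.get("scopes", [])) & PRODUCTION_SCOPES
            if overlap:
                bad.append(f"{acct['name']}: {sorted(overlap)}")
    return bad
```

Run this as a scheduled audit against your identity provider's export, and wire the same condition into the runtime alert described above so drift is caught from both directions.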

Compliance note: ISO/IEC 27001:2022 Annex A Control 8.2 requires that privileged access rights be restricted and managed. Dedicated, segregated credentials for AI systems support that control.

7. Secrets in AI Training Data

Done when: You've verified that codebases used to train or fine-tune AI models contain no active credentials, and you have a process to sanitize code before it's used for model training.

If you're fine-tuning models on your codebase, those training datasets become a secrets exposure vector. Scan and sanitize before training.

What good looks like: Before uploading code samples to fine-tune your AI assistant, you run them through your secrets scanner. You maintain a sanitized training corpus that's rescanned quarterly. Your AI vendor agreement prohibits them from training on unsanitized code.
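A sanitization pass can redact rather than merely detect, so the corpus stays usable for training. A minimal sketch with two assumed illustrative detectors:

```python
import re

# Illustrative detectors; reuse your scanner's full pattern set in practice.
DETECTORS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),      # AWS access key ID
    re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),   # GitHub personal access token
]

def sanitize(source: str) -> tuple[str, int]:
    """Redact detected secrets before code enters a training corpus.

    Returns the sanitized text and the number of redactions made,
    which you should log per file for the quarterly rescan audit.
    """
    redactions = 0
    for pattern in DETECTORS:
        source, n = pattern.subn("<REDACTED>", source)
        redactions += n
    return source, redactions
```

Redacting with a fixed placeholder also stops the model from memorizing the secret's shape; the code sample still reads naturally where a key once sat.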

Common Mistakes

Scanning only new code: Your AI assistant learns from your entire codebase. Historical secrets remain exploitable.

Treating all secrets equally: A hardcoded database password for production is not the same risk as an expired test API key. Prioritize remediation based on credential scope and validity.
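Prioritization by scope and validity can be as simple as an additive triage score. The weights below are an assumed starting point, not a standard:

```python
def risk_score(secret: dict) -> int:
    """Triage score: active production secrets with write scope first.

    Weights are illustrative; tune them to your environment.
    """
    score = 0
    if secret.get("valid"):                    # credential still authenticates
        score += 4
    if secret.get("env") == "production":
        score += 3
    if "write" in secret.get("scopes", []):
        score += 2
    return score

def triage(secrets: list[dict]) -> list[dict]:
    """Order remediation work: highest-risk findings first."""
    return sorted(secrets, key=risk_score, reverse=True)
```

The key property is that validity dominates: an expired test key scores near zero, so the rotation queue is never blocked behind dead findings.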

Ignoring AI-generated credential patterns: AI tools sometimes generate placeholder credentials that look real enough to trigger false positives—or worse, generate credentials that match valid patterns. Configure your scanner to catch both.

Manual rotation for high-velocity credentials: If your AI systems generate or consume dozens of credentials daily, manual rotation doesn't scale. Automate or redesign.

Next Steps

Start with items 1, 2, and 3. You need visibility before you can enforce policy. Run your historical scan during low-activity hours—it's CPU-intensive.

After establishing baseline coverage, focus on item 6. Segregating AI credentials prevents a compromised development tool from becoming a production breach.

Schedule quarterly reviews of your non-human identity inventory. AI adoption accelerates. Your secrets management program must accelerate with it.

GitGuardian
