
Dependency Management When AI Writes Half Your Code

Your AI coding assistant just suggested a library. Your developer accepted it. Your codebase now includes a new dependency with 47 transitive dependencies, three of which have known CVEs. This happened in 12 seconds.

Traditional dependency review processes—where a senior engineer evaluates each new package during code review—can't keep up when AI tools recommend libraries at the speed of autocomplete. This checklist helps you implement controls that operate at the pace of AI-driven development.

What This Checklist Covers

This checklist addresses dependency management controls for environments where AI coding assistants influence library selection. It focuses on automated gates, visibility tools, and policy enforcement that work without slowing down AI-accelerated development cycles. Each item maps to specific requirements in PCI DSS v4.0.1 (Requirement 6.3.2 for secure development practices) and ISO 27001 (A.8.31 for separation of development, test, and production environments).

Prerequisites

Before implementing this checklist:

  • Identify which AI coding tools your developers use (GitHub Copilot, Cursor, Amazon CodeWhisperer, etc.)
  • Generate a complete dependency graph for your applications
  • Maintain a centralized artifact repository or registry (Artifactory, Nexus, or similar)
  • Ensure your CI/CD pipeline can fail builds based on policy violations

Checklist Items

1. Establish an Approved Dependency Allowlist

Action: Create and maintain a list of pre-vetted libraries that AI tools and developers can use without additional review.

Requirement reference: PCI DSS v4.0.1 Requirement 6.3.2 requires an inventory of bespoke and custom software, including third-party components incorporated into it, to facilitate vulnerability and patch management.

What good looks like: Your allowlist includes version ranges, is updated monthly, and covers 80%+ of your team's common use cases. When an AI assistant suggests a library on the allowlist, developers can accept it immediately. When it suggests something else, they see a clear prompt to request a security review.
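A minimal sketch of such a gate, assuming a hand-maintained allowlist with simple minimum-version rules (the package names and versions here are illustrative, not a real policy):

```python
# Minimal allowlist gate. ALLOWLIST maps a package name to a minimum
# allowed version; anything absent or too old routes to security review.
ALLOWLIST = {
    "requests": (2, 28, 0),
    "pyyaml": (6, 0, 0),
}

def parse_version(v: str) -> tuple:
    """Turn '2.31.0' into (2, 31, 0) for simple tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def check_dependency(name: str, version: str) -> str:
    """Return 'allow' for allowlisted versions, else 'needs-review'."""
    minimum = ALLOWLIST.get(name.lower())
    if minimum is None:
        return "needs-review"  # not pre-vetted: prompt a review request
    if parse_version(version) >= minimum:
        return "allow"
    return "needs-review"      # allowlisted package, but version too old

print(check_dependency("requests", "2.31.0"))  # allow
print(check_dependency("leftpad", "1.0.0"))    # needs-review
```

In practice the allowlist would live in a versioned policy file and support full semver range syntax, but the decision logic stays this simple: fast path for approved packages, review path for everything else.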

2. Implement Automated Vulnerability Scanning at Commit Time

Action: Configure pre-commit hooks or IDE plugins that scan AI-suggested dependencies for known vulnerabilities before code reaches your repository.

Requirement reference: OWASP ASVS v4.0.3 requirement V14.2.1 requires that all components are up to date, preferably checked with a dependency checker at build or compile time.

What good looks like: A developer accepts an AI suggestion that includes a vulnerable library. Within 3 seconds, they see an inline warning with the CVE number, severity score, and a link to a safe alternative. The commit is blocked until they choose a different version or request an exception.
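The core of that gate can be sketched as a pre-commit check. In production you would query a live advisory feed such as OSV; here a small local table stands in for it (the advisory entry is an assumption for illustration):

```python
# Commit-time vulnerability gate, a sketch. ADVISORIES is a stand-in for
# a real advisory feed; keys are (package, version) pairs.
ADVISORIES = {
    ("lodash", "4.17.20"): {"cve": "CVE-2021-23337", "severity": 7.2},
}

def scan(dependencies):
    """Return findings for any (name, version) with a known advisory."""
    findings = []
    for name, version in dependencies:
        advisory = ADVISORIES.get((name, version))
        if advisory:
            findings.append((name, version, advisory["cve"], advisory["severity"]))
    return findings

def gate(dependencies):
    """Exit status for a pre-commit hook: nonzero blocks the commit."""
    findings = scan(dependencies)
    for name, version, cve, severity in findings:
        print(f"BLOCKED {name}=={version}: {cve} (CVSS {severity})")
    return 1 if findings else 0

status = gate([("lodash", "4.17.20"), ("react", "18.2.0")])
```

Wired into a pre-commit framework, a nonzero return stops the commit; the printed line gives the developer the CVE and severity inline, as described above.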

3. Generate and Maintain Software Bills of Materials (SBOMs)

Action: Automatically generate SBOMs in CycloneDX or SPDX format for every build, capturing all direct and transitive dependencies.

Requirement reference: NIST CSF function ID.AM-2 requires inventory of software platforms and applications.

What good looks like: Every production deployment includes an SBOM artifact stored alongside the container image or deployment package. Your security team can query all SBOMs to find which applications use a specific library within 5 minutes of a new CVE disclosure. SBOMs include license information to flag GPL or other restrictive licenses before they reach production.
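Real pipelines generate SBOMs with a tool such as a CycloneDX generator or Syft; the sketch below builds a minimal CycloneDX-shaped document by hand to show the structure and the kind of query your incident-response playbook needs (component list and license identifiers are illustrative):

```python
import json

def make_sbom(components):
    """Build a minimal CycloneDX-style SBOM from (name, version, license) tuples."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                "purl": f"pkg:pypi/{name}@{version}",  # package URL for cross-referencing
                "licenses": [{"license": {"id": license_id}}],
            }
            for name, version, license_id in components
        ],
    }

def uses_library(sbom, name):
    """The CVE-response query: does this SBOM contain the named library?"""
    return any(c["name"] == name for c in sbom["components"])

sbom = make_sbom([("requests", "2.31.0", "Apache-2.0")])
print(json.dumps(sbom, indent=2))
print(uses_library(sbom, "requests"))  # True
```

Run `uses_library` across every stored SBOM and you get the "which applications are affected" answer within minutes of a disclosure, which is the whole point of keeping them queryable.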

4. Create AI Bills of Materials (AIBOMs) for AI Model Dependencies

Action: Document which AI models, training data sources, and ML libraries your applications depend on, including version information and update schedules.

Requirement reference: ISO/IEC 27001:2022 control A.5.23 requires information security for use of cloud services, which extends to AI service dependencies.

What good looks like: You maintain a structured inventory showing that your application uses OpenAI GPT-4 via API (with model version), includes TensorFlow 2.15.0 for local inference, and depends on three internal ML models last updated in Q4 2024. When a model provider announces a deprecation, you know exactly which services are affected.
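There is no single mandated AIBOM schema yet, so the sketch below shows one plausible structured inventory; the field names, model identifiers, and dates are assumptions chosen to mirror the example above:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelDependency:
    """One entry in a hypothetical AIBOM inventory."""
    name: str
    kind: str          # "api", "local", or "internal"
    version: str
    last_updated: str  # ISO date of the model's last update
    owner: str         # provider or internal team responsible

AIBOM = [
    ModelDependency("gpt-4", "api", "gpt-4-0613", "2023-06-13", "OpenAI"),
    ModelDependency("tensorflow", "local", "2.15.0", "2023-11-15", "platform-team"),
    ModelDependency("churn-model", "internal", "3.2", "2024-11-02", "ml-team"),
]

def affected_by_deprecation(aibom, name):
    """When a provider deprecates a model, list every matching entry."""
    return [asdict(m) for m in aibom if m.name == name]

print(affected_by_deprecation(AIBOM, "gpt-4"))
```

Kept in version control next to your SBOMs, this inventory answers the deprecation question the same way an SBOM answers the CVE question.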

5. Set Dependency Age and Maintenance Thresholds

Action: Define and enforce policies that reject dependencies that haven't been updated in a specified timeframe or lack active maintenance.

Requirement reference: PCI DSS v4.0.1 Requirement 6.3.3 requires that system components are protected from known vulnerabilities by installing applicable security patches and updates.

What good looks like: Your policy automatically flags any library with no commits in 18 months or no releases in 24 months. When an AI assistant suggests an abandoned package, the developer sees a warning before accepting. Your dashboard shows the percentage of dependencies meeting freshness criteria, and you review exceptions quarterly.
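The freshness check itself is a date comparison. This sketch uses the thresholds stated above (18 months without commits, 24 without releases) and assumes you have already pulled last-commit and last-release dates from a registry or repository API:

```python
from datetime import date

# Thresholds from the policy above, approximated in days.
MAX_COMMIT_AGE_DAYS = 18 * 30
MAX_RELEASE_AGE_DAYS = 24 * 30

def is_stale(last_commit: date, last_release: date, today: date) -> bool:
    """Flag a package whose commit or release activity exceeds the thresholds."""
    commit_age = (today - last_commit).days
    release_age = (today - last_release).days
    return commit_age > MAX_COMMIT_AGE_DAYS or release_age > MAX_RELEASE_AGE_DAYS

# An abandoned package trips the check; an active one does not.
print(is_stale(date(2021, 1, 1), date(2020, 6, 1), date(2025, 1, 1)))   # True
print(is_stale(date(2024, 12, 1), date(2024, 10, 1), date(2025, 1, 1)))  # False
```

The dashboard metric described above is then just the fraction of dependencies for which `is_stale` returns False.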

6. Implement License Compliance Checks

Action: Scan all AI-suggested dependencies for license types and block those incompatible with your application's license or business model.

What good looks like: A developer accepts an AI suggestion that includes a GPL-licensed library in your proprietary SaaS application. The commit fails immediately with a clear explanation of the license conflict and suggestions for permissively licensed alternatives. Your legal team receives a weekly report of all approved license exceptions.
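The gate reduces to classifying SPDX license identifiers against policy sets. The allow and block sets below are assumptions for a proprietary SaaS product; your legal team defines the real ones:

```python
# License gate sketch using SPDX identifiers. Policy sets are
# illustrative, not legal advice.
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause", "ISC"}
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

def license_verdict(spdx_id: str) -> str:
    """Classify a dependency's license: allow, block, or route to legal."""
    if spdx_id in ALLOWED:
        return "allow"
    if spdx_id in COPYLEFT:
        return "block"        # fail the commit with an explanation
    return "legal-review"     # unknown or unusual license: ask legal

print(license_verdict("GPL-3.0-only"))  # block
print(license_verdict("MIT"))           # allow
print(license_verdict("EUPL-1.2"))      # legal-review
```

The "legal-review" branch is the escape hatch the Common Mistakes section calls for: unknown licenses get a human decision rather than a silent allow or a blanket block.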

7. Configure Dependency Update Automation with Safety Rails

Action: Use Dependabot, Renovate, or similar tools to automatically create pull requests for dependency updates, with automated testing and rollback procedures.

Requirement reference: NIST 800-53 Rev 5 control SI-2 requires flaw remediation, including software updates.

What good looks like: Your automation creates PRs for minor and patch updates daily. Each PR triggers your full test suite. If tests pass and the update is on your allowlist, it merges automatically. Major version updates require manual review. You patch critical vulnerabilities in production within 24 hours of disclosure.
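As one way to express those rails, a minimal Renovate configuration along these lines automerges minor and patch updates while holding majors for review (exact rule names should be checked against the Renovate docs for your version):

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    },
    {
      "matchUpdateTypes": ["major"],
      "automerge": false
    }
  ]
}
```

Automerge only fires after your required status checks pass, which is what makes the full test suite the real safety rail here.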

8. Monitor AI Assistant Behavior and Suggestions

Action: Log which dependencies AI tools suggest most frequently and track acceptance rates to identify patterns that need policy updates.

What good looks like: Your monthly report shows that AI assistants suggested 847 dependencies last month, developers accepted 623, and 89 were blocked by policy. You identify that a specific outdated library is suggested 40+ times monthly and work with your AI tool provider to update their training data or add it to your allowlist with a modern alternative.
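A report like that can be produced from a simple suggestion log. The event shape below, `(package, outcome)` pairs, is an assumption, since AI-assistant telemetry varies by tool:

```python
from collections import Counter

def summarize(events):
    """Aggregate (package, outcome) events into the monthly report fields."""
    outcomes = Counter(outcome for _, outcome in events)
    blocked_pkgs = Counter(pkg for pkg, outcome in events if outcome == "blocked")
    return {
        "suggested": len(events),
        "accepted": outcomes["accepted"],
        "blocked": outcomes["blocked"],
        "top_blocked": blocked_pkgs.most_common(3),  # candidates for allowlist work
    }

report = summarize([
    ("moment", "blocked"), ("moment", "blocked"),
    ("requests", "accepted"), ("leftpad", "rejected"),
])
print(report["top_blocked"])  # [('moment', 2)]
```

The `top_blocked` list is the actionable output: a package blocked dozens of times a month is exactly the signal to add a modern alternative to the allowlist.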

Common Mistakes

Blocking all AI suggestions by default. This defeats the productivity gains and frustrates developers. Instead, create a fast path for approved dependencies and a clear process for requesting reviews of new ones.

Treating SBOMs as compliance theater. Generating SBOMs but never querying them provides no security value. Build queries into your incident response playbook and vulnerability management process.

Ignoring transitive dependencies. AI assistants typically suggest top-level libraries, but the risk often hides in dependencies-of-dependencies. Your scanning must traverse the entire dependency tree.

Setting policies without escape hatches. Sometimes you legitimately need an older library or one that fails automated checks. Document your exception process clearly and review exceptions regularly to prevent policy erosion.

Next Steps

Start with items 1, 2, and 3 from this checklist—they provide immediate risk reduction without requiring new tools. Implement item 7 within 90 days to handle the increased volume of dependencies that AI-accelerated development introduces. Review your dependency policies quarterly as AI coding assistants evolve and their suggestion patterns change.

Your goal isn't to slow down AI-assisted development. It's to ensure that security and compliance controls operate at the same speed as the code generation.
