
82% of Organizations Can't Name Their AI Assets

Your developers are using AI to write code. Your security team scans containers and dependencies. But when someone asks, "What AI models are running in production?" — silence.

Here's what the data shows: one in three organizations report that over 60% of their code is now AI-generated. Yet only 18% have any AI governance in place. This isn't a future problem. It's a current inventory gap that creates compliance exposure and security risk you can't measure.

The Expanding Software Supply Chain

The software supply chain has expanded significantly. Your dependency tree used to include open-source libraries, container images, and third-party APIs. Now add foundation models, fine-tuned variants, prompt templates, vector databases, agent frameworks, and protocols like MCP (Model Context Protocol). Each is an asset that needs tracking, but none appear in your SBOM.

This situation mirrors the open-source governance crisis of 2017-2019, when breaches like Equifax's (via an unpatched Apache Struts dependency) showed that teams couldn't answer basic questions about their own dependency trees; Log4Shell later made the question "Are we using Log4j?" infamous. The difference: AI assets change faster, have less standardized packaging, and carry different risk profiles. A compromised model can leak training data. An ungoverned agent can access systems outside your security perimeter.

Key Findings

Lack of Ownership. In most organizations, no single team owns the AI asset inventory. Security teams track vulnerabilities in code dependencies. Data teams manage training datasets. DevOps handles deployment infrastructure. The AI components themselves — models, agents, retrieval systems — fall into the gap between these responsibilities. This creates a situation where everyone assumes someone else is tracking it.

Inadequate Tools. Your current SBOM tools catalog packages from npm, PyPI, and Maven. They don't capture model checkpoints downloaded from Hugging Face, API keys for OpenAI endpoints, or custom agent configurations. When you run a supply chain security scan, it misses the AI layer entirely. This isn't a tool limitation — it's a category mismatch.

Emerging Compliance Requirements. The EU AI Act requires organizations to document high-risk AI systems. ISO/IEC 27001:2022 Annex A control 5.9 (inventory of information and other associated assets) applies to AI components just as it does to traditional software. SOC 2 auditors are beginning to ask about AI system controls during CC6.1 (logical access) reviews. You can't demonstrate control over assets you haven't inventoried.

Compounding Risk. Unlike a static library dependency, AI assets evolve. Your team fine-tunes a model, changes a prompt template, or updates retrieval logic. Each change creates a new version that needs tracking. Without version control for AI components, you can't reproduce builds, roll back problematic deployments, or investigate incidents. The "what changed?" question becomes unanswerable.
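One way to make "what changed?" answerable is to content-address each AI component: hash the weights, prompt templates, and retrieval config together, so any edit yields a new version identifier. Here's a minimal sketch in Python; the file paths and component layout are hypothetical:

```python
# Minimal sketch: derive a reproducible version identifier for an AI
# component by hashing its artifacts. Paths and layout are illustrative
# assumptions, not a standard.
import hashlib
from pathlib import Path

def component_version(paths: list[Path]) -> str:
    """Content-address an AI component (weights, prompts, retrieval config)."""
    digest = hashlib.sha256()
    for path in sorted(paths):
        digest.update(path.name.encode())   # bind file identity
        digest.update(path.read_bytes())    # bind file content
    return digest.hexdigest()[:16]

# Any change to weights, prompt, or retrieval logic yields a new ID.
version = component_version([
    Path("models/support-bot/adapter.safetensors"),   # hypothetical paths
    Path("prompts/support-bot/system.txt"),
    Path("retrieval/support-bot/config.yaml"),
])
print(f"support-bot @ {version}")
```

Record the identifier with each deployment and the rollback and incident-investigation questions become diffs rather than archaeology.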

Shadow AI Proliferation. Developers are integrating AI capabilities without going through procurement or security review. They're using free API tiers, downloading models directly from repositories, and building agent workflows that touch production data. This isn't malicious — it's pragmatic. But it means your actual AI footprint is larger than your documented one, possibly by an order of magnitude.

Implications for Your Team

You're carrying risk you can't quantify. When your next SOC 2 audit asks about AI system controls, you'll need documentation you don't have. When a model vulnerability is disclosed, you won't know if you're affected. When compliance asks for an AI inventory to meet new regulatory requirements, you'll be building it under deadline pressure.

The technical debt is real. Every AI component deployed without documentation is a future incident investigation that will take twice as long. Every model running without version tracking is a reproducibility problem waiting to happen. Every agent with undefined access scope is a privilege escalation risk.

Your existing governance processes don't extend to AI. Change management doesn't capture model updates. Access control doesn't cover agent permissions. Vulnerability management doesn't scan model dependencies. You need parallel processes, and you need them before your next audit cycle.

Action Plan

Immediate (this quarter): Assign ownership for the AI asset inventory. This can't be a working group — it needs a single accountable owner. Security engineering is the logical choice, since they already own the software supply chain inventory. Give them budget for tools and authority to require registration of AI components before production deployment.

30 days: Survey your current state. Build a spreadsheet with five columns: AI component type, location (repo/service), purpose, owner, and last review date. Start with known uses — the LLM API calls in your codebase, the recommendation models in production, the chatbot on your website. This won't be complete, but it establishes a baseline. Document what you find in a format your auditors will accept.
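If you'd rather keep that baseline in version control than in a spreadsheet, the same five columns work as a CSV. A small sketch; the example rows are hypothetical:

```python
# Seed the AI asset baseline with the five survey columns.
import csv

COLUMNS = ["component_type", "location", "purpose", "owner", "last_review"]

# Hypothetical starting rows; replace with what your survey finds.
known_uses = [
    ["llm_api", "repo:billing-service", "invoice summarization", "payments-team", "2025-01-15"],
    ["recommendation_model", "svc:product-recs", "homepage ranking", "ml-platform", "2024-11-02"],
    ["chatbot", "web:support-widget", "tier-1 support deflection", "cx-eng", "2024-12-20"],
]

with open("ai-asset-baseline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(known_uses)
```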

60 days: Extend your SBOM process to include AI components. Add fields for model name, version, source (Hugging Face, OpenAI, internal), training data provenance if available, and inference endpoint. Integrate this into your CI/CD pipeline so new AI components can't reach production without documentation. This doesn't require new tools — a structured YAML file in each repository works as a starting point.
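As a concrete starting point, the CI gate could look like the sketch below. The manifest filename and schema are assumptions for illustration, not a standard SBOM extension, and the check assumes PyYAML is available:

```python
# CI gate: fail the build if a repo ships AI components without the
# required manifest fields. The schema and filename (ai-sbom.yaml,
# a YAML list of component entries) are illustrative assumptions.
import sys
import yaml  # PyYAML

REQUIRED = {"model_name", "version", "source", "inference_endpoint"}
OPTIONAL = {"training_data_provenance"}  # capture when available

def validate(manifest_path: str = "ai-sbom.yaml") -> None:
    with open(manifest_path) as f:
        components = yaml.safe_load(f) or []
    for entry in components:
        missing = REQUIRED - entry.keys()
        if missing:
            sys.exit(f"{entry.get('model_name', '<unnamed>')}: missing {sorted(missing)}")
    print(f"{len(components)} AI component(s) documented")

if __name__ == "__main__":
    validate()
```

Each manifest entry then carries the fields named above, with training data provenance recorded when you have it.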

90 days: Define AI-specific security controls. Adapt OWASP ASVS v4.0.3 requirements for your AI components. For example, V1.14 (configuration) should cover model hyperparameters. V4.1 (access control) should cover agent permissions. V14.2 (dependencies) should cover model supply chain integrity. Map these to your existing control framework so auditors see continuity, not a separate AI security program.
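The mapping itself can live as data next to your control framework. A minimal sketch; the control titles come from ASVS v4.0.3, while the AI-specific scoping is this article's suggested adaptation, not official ASVS language:

```python
# Map existing ASVS v4.0.3 controls to their AI-component interpretation
# so auditors see one framework, not a parallel program. The "ai_scope"
# wording is an illustrative adaptation.
ASVS_AI_MAPPING = {
    "V1.14": {
        "title": "Configuration Architecture",
        "ai_scope": "model hyperparameters, prompt templates, decoding settings",
    },
    "V4.1": {
        "title": "General Access Control Design",
        "ai_scope": "agent permissions, tool-call scopes, retrieval data boundaries",
    },
    "V14.2": {
        "title": "Dependency",
        "ai_scope": "model supply chain integrity: checkpoint hashes, source registries",
    },
}
```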

This quarter: Implement detective controls for shadow AI. Scan your code repositories for common AI library imports (transformers, langchain, openai). Review cloud spend for inference API charges that weren't approved. Check egress logs for traffic to model hosting endpoints. When you find undocumented AI usage, don't punish it — document it and bring it into governance.
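A first-pass repository scan needs only a few lines of Python. This sketch catches the import patterns named above, plus a couple of common ones added as assumptions; treat hits as leads for the inventory, not as findings:

```python
# Detective control: flag undocumented AI usage by scanning source
# trees for common AI library imports. The pattern list is a starting
# point, not exhaustive.
import re
from pathlib import Path

AI_IMPORTS = re.compile(
    r"^\s*(?:import|from)\s+(transformers|langchain|openai|anthropic|llama_index)\b",
    re.MULTILINE,
)

def scan(root: str) -> list[tuple[str, str]]:
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in AI_IMPORTS.finditer(text):
            hits.append((str(path), match.group(1)))
    return hits

for path, library in scan("./repos"):
    print(f"{path}: uses {library}")  # feed these into the inventory
```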

Next quarter: Build a model registry that integrates with your deployment pipeline. This can be as simple as an internal service that requires teams to register model metadata before deploying. Include fields for: model card (purpose, limitations, training data), security review status, compliance classification (does it process PII, make decisions about people, etc.), and approval chain. Make this a required gate in your deployment process.
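A minimal registration record matching those fields might look like the following sketch. All names are illustrative; the point is that the deployment gate refuses anything unregistered or unapproved:

```python
# Minimal registry entry and deployment gate. Field names are
# illustrative; adapt them to your compliance classification scheme.
from dataclasses import dataclass, field

@dataclass
class ModelRegistryEntry:
    name: str
    version: str
    model_card: dict          # purpose, limitations, training data
    security_review: str      # "pending" | "approved" | "rejected"
    processes_pii: bool
    makes_decisions_about_people: bool
    approval_chain: list[str] = field(default_factory=list)

    def deployable(self) -> bool:
        """Required gate: no approval, no deployment."""
        return self.security_review == "approved" and bool(self.approval_chain)

entry = ModelRegistryEntry(
    name="support-bot",
    version="2025.01",
    model_card={"purpose": "tier-1 support", "limitations": "English only"},
    security_review="approved",
    processes_pii=True,
    makes_decisions_about_people=False,
    approval_chain=["security-eng", "privacy-officer"],
)
assert entry.deployable()
```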

Ongoing: Establish a review cadence for AI assets. Quarterly is reasonable for models in production. Include questions like: Is this model still needed? Has the risk profile changed? Are there newer versions with security fixes? Is the training data still compliant with current regulations? Treat this like your existing dependency review process — it's the same discipline applied to a different asset class.
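The cadence can be enforced with the same inventory data you built in the 30-day step. A small sketch, assuming the ai-asset-baseline.csv format from above:

```python
# Flag AI assets whose last review is older than the quarterly cadence.
# Assumes the ai-asset-baseline.csv produced in the 30-day step.
import csv
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)

with open("ai-asset-baseline.csv") as f:
    for row in csv.DictReader(f):
        last_review = date.fromisoformat(row["last_review"])
        if date.today() - last_review > REVIEW_INTERVAL:
            print(f"overdue: {row['location']} ({row['purpose']}), "
                  f"owner {row['owner']}, last reviewed {last_review}")
```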
