The belief that documenting AI components will control AI risk is widespread. CISA and the G7 have released guidance on AI software bills of materials (SBOMs), and the compliance world is treating the AI SBOM as the long-awaited solution. Catalog your models, list your datasets, track your dependencies—suddenly AI systems become manageable, auditable, and transparent.
This is magical thinking.
Why Documentation Alone Falls Short
The idea behind AI SBOMs is that comprehensive documentation leads to effective risk management. This works for traditional software because code behaves predictably. The same input produces the same output. You can trace a vulnerability through your dependency tree, patch it, and verify the fix.
AI systems don't operate this way. A model trained on Dataset A and integrated with Component B doesn't just execute logic—it produces probabilistic outputs that shift based on inputs you can't fully predict. The SBOM tells you what went into the system. It doesn't tell you what the system will do.
CISA's guidance requires documentation of models, datasets, software components, providers, licenses, and dependencies. That's a start. But here's what it can't capture: how your customer service chatbot will respond to prompt injection, whether your fraud detection model exhibits bias against specific demographics, or what happens when your training data drifts six months after deployment.
You're documenting ingredients, not behavior.
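For concreteness, here is roughly what a single inventory entry looks like in practice. This is a hypothetical, simplified structure; the field names are illustrative and not drawn from CISA's guidance or any particular SBOM standard.

```python
# A hypothetical, simplified AI SBOM entry. Field names are illustrative,
# not taken from CISA guidance or a specific SBOM format.
ai_sbom_entry = {
    "system": "customer-support-chatbot",
    "model": {
        "name": "example-llm",              # assumed model name
        "version": "2024-06",
        "provider": "Example Vendor Inc.",
        "license": "proprietary",
    },
    "datasets": [
        {"name": "support-tickets-2023", "lineage": "internal CRM export"},
    ],
    "software_components": [
        {"name": "transformers", "version": "4.41.0"},  # example dependency
    ],
    "dependencies": ["vector-db", "prompt-orchestration-service"],
}

# Every field above describes composition: what went into the system.
# None of it describes behavior: how the chatbot handles prompt injection,
# whether outputs drift, or how it fails under adversarial input.
```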
The Evidence: Where SBOMs Break Down
Traditional SBOMs work because software supply chain risks are largely compositional. You inherit vulnerabilities from your dependencies. Fix the dependency, fix the vulnerability. The attack surface is mappable.
AI systems introduce non-compositional risks. Your model's behavior emerges from the interaction of components, not just their presence. Two organizations using identical models, datasets, and infrastructure can face completely different risks based on how they've configured prompts, set confidence thresholds, or integrated outputs into business logic.
Consider what you actually need to assess AI risk:
- Model behavior under adversarial conditions (not just model architecture)
- Data quality and representativeness (not just dataset lineage)
- Integration patterns and guardrails (not just component versions)
- Monitoring and feedback loops (not just deployment configurations)
An AI SBOM gives you visibility into the first part of each pair. The second part—the part that determines actual risk—requires runtime analysis, testing, and ongoing monitoring. Documentation can't substitute for observation.
What to Do Instead
Start with the SBOM. It's necessary, just not sufficient. You need to know what's in your AI systems before you can assess their risks. But don't stop there.
Build behavioral baselines. For each AI system, document expected outputs for representative inputs. Test edge cases. When you update a model or dataset, compare new behavior against your baseline. The SBOM tells you what changed in composition. Behavioral testing tells you what changed in practice.
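One way to make this concrete: keep a small suite of representative inputs with recorded reference outputs, and compare any updated model against them. The sketch below assumes a scoring model wrapped in a `predict()` callable and a tolerance you choose for your own domain; the file path and baseline format are made up for illustration.

```python
import json

# Hypothetical baseline file: representative inputs paired with the outputs
# the production model produced when the baseline was recorded.
BASELINE_PATH = "baselines/fraud_model_v3.json"

def load_baseline(path: str) -> list[dict]:
    # Expected shape (illustrative): [{"input": {...}, "expected_score": 0.12}, ...]
    with open(path) as f:
        return json.load(f)

def compare_to_baseline(predict, baseline: list[dict], tolerance: float = 0.05) -> list[dict]:
    """Run a candidate model over baseline inputs and flag behavioral drift.

    `predict` is whatever callable wraps your model; `tolerance` is a
    domain-specific threshold you have to pick, not a universal constant.
    """
    regressions = []
    for case in baseline:
        new_score = predict(case["input"])
        if abs(new_score - case["expected_score"]) > tolerance:
            regressions.append({
                "input": case["input"],
                "expected": case["expected_score"],
                "observed": new_score,
            })
    return regressions
```

Run a check like this whenever the SBOM changes (new model version, new dataset) and on a schedule, since behavior can shift without any change in composition.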
Implement runtime guardrails. Your SBOM lists the model. Your guardrails constrain what the model can do. Input validation, output filtering, confidence thresholds, human-in-the-loop requirements—these controls operate independently of the SBOM but depend on understanding what's documented there.
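A guardrail layer can be as simple as a wrapper that validates input, checks model confidence, and routes uncertain or filtered cases to a human. The following is a sketch under assumed names (`classify`, `BLOCKED_PATTERNS`) and an arbitrary threshold, not a reference design.

```python
import re

# Hypothetical patterns you never forward to the model, e.g. obvious
# prompt-injection attempts. Real input filters are more sophisticated.
BLOCKED_PATTERNS = [re.compile(r"ignore (all|previous) instructions", re.I)]

CONFIDENCE_THRESHOLD = 0.80  # assumed value; tune per system

def guarded_classify(classify, text: str) -> dict:
    """Wrap a model call with input validation, a confidence gate,
    and a human-in-the-loop fallback.

    `classify` is assumed to return {"label": str, "confidence": float}.
    """
    # Input validation: refuse clearly adversarial input before it reaches the model.
    if any(p.search(text) for p in BLOCKED_PATTERNS):
        return {"decision": "rejected", "reason": "input failed validation"}

    result = classify(text)

    # Confidence threshold: don't act autonomously on uncertain outputs.
    if result["confidence"] < CONFIDENCE_THRESHOLD:
        return {"decision": "escalate_to_human", "model_output": result}

    return {"decision": "accepted", "model_output": result}
```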
Treat AI SBOMs as living documents. Traditional SBOMs update when you change dependencies. AI SBOMs need updates when model behavior changes, even if components stay the same. Model drift, data drift, and performance degradation aren't captured in static documentation. Build processes to refresh your AI SBOMs based on monitoring signals, not just deployment events.
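In practice that means the trigger for an SBOM review comes from monitoring, not from the deploy pipeline. A rough sketch, with made-up metric names and thresholds standing in for whatever your observability stack actually reports:

```python
from datetime import datetime, timezone

# Hypothetical monitoring signals; in a real system these come from your
# observability stack, not hard-coded constants.
DRIFT_THRESHOLD = 0.15   # e.g. a drift statistic on input features
ACCURACY_FLOOR = 0.90    # minimum acceptable rolling accuracy

def sbom_needs_refresh(metrics: dict) -> bool:
    """Decide whether an AI SBOM entry should be re-reviewed based on
    behavior, even though no component has changed."""
    return (
        metrics.get("input_drift", 0.0) > DRIFT_THRESHOLD
        or metrics.get("rolling_accuracy", 1.0) < ACCURACY_FLOOR
    )

def mark_stale(sbom_entry: dict, metrics: dict) -> dict:
    """Annotate the inventory entry so reviewers know why it was reopened."""
    if sbom_needs_refresh(metrics):
        sbom_entry["review_status"] = "stale"
        sbom_entry["review_reason"] = metrics
        sbom_entry["flagged_at"] = datetime.now(timezone.utc).isoformat()
    return sbom_entry
```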
Integrate with vendor risk management. When evaluating AI vendors, request their AI SBOMs—but also ask about their testing methodology, monitoring capabilities, and incident response procedures. The SBOM tells you what they're using. The operational questions tell you whether they can manage it.
Focus on high-risk systems first. Not every AI system needs the same scrutiny. Prioritize AI SBOMs for systems that make consequential decisions, handle sensitive data, or operate in regulated domains. A recommendation engine and a credit decisioning model don't carry the same risk profile.
When the Conventional Wisdom IS Right
AI SBOMs absolutely matter for specific compliance and security scenarios.
If you're subject to software supply chain regulations, AI SBOMs extend your existing compliance obligations to AI systems. You can't claim comprehensive software inventory management while excluding AI components.
For incident response, AI SBOMs are invaluable. When a vulnerability surfaces in a widely-used ML library or a dataset is discovered to contain poisoned samples, you need to know immediately which systems are affected. That requires the kind of component-level visibility that SBOMs provide.
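This is the query SBOMs make trivial. Given an inventory of entries shaped like the hypothetical example sketched earlier, finding every affected system is a filter, not an investigation:

```python
def systems_affected(inventory: list[dict], component_name: str, bad_versions: set[str]) -> list[str]:
    """Return the systems whose SBOM lists a vulnerable component version.

    `inventory` holds entries shaped like the hypothetical example above;
    the shape is illustrative, not a standard.
    """
    affected = []
    for entry in inventory:
        for comp in entry.get("software_components", []):
            if comp["name"] == component_name and comp["version"] in bad_versions:
                affected.append(entry["system"])
                break
    return affected

# Example: suppose an advisory lands for certain versions of an ML library
# (library name and versions here are made up for illustration).
# affected = systems_affected(inventory, "transformers", {"4.41.0", "4.41.1"})
```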
For procurement, AI SBOMs give you an advantage. Vendors who can't articulate what models they're using, where training data came from, or how they manage dependencies aren't ready for enterprise deployment. Requiring AI SBOMs raises the bar.
And for teams just starting to grapple with AI governance, the exercise of creating an AI SBOM forces crucial conversations. What AI systems do we actually have? Who's responsible for them? What data are they using? These questions need answers regardless of whether the SBOM itself solves your risk problems.
The mistake is treating documentation as risk management. AI SBOMs are a prerequisite for managing AI risk, not a solution to it. Build the inventory, then build the controls that actually constrain behavior. Your compliance framework needs both.