
MCP and Agentic AI: 5 Myths Security Teams Need to Abandon

When the Linux Foundation adopted the Model Context Protocol (MCP) under the newly formed Agentic AI Foundation (AAIF) in late 2025, your security channels likely buzzed with speculation. Some teams dismissed it as another AI hype cycle, while others feared immediate changes to their security posture.

These myths persist because agentic AI represents genuinely new territory—systems that don't just respond to prompts but take autonomous action across your infrastructure. When foundational shifts happen at the Linux Foundation level, the gap between announcement and practical implication creates space for misconceptions. Let's clear them up.

Myth 1: "This is just another AI protocol—it won't affect our security architecture"

Reality: MCP changes how AI systems access your data and services, directly impacting your attack surface.

Unlike previous AI integrations that operated within sandboxed environments, MCP enables AI agents to interact with multiple data sources and tools simultaneously. This means an AI agent could theoretically query your database, call your API, and access your file system in a single workflow—exactly the kind of lateral movement you've spent years preventing.

Your threat model needs updating. When assessing a new MCP-enabled tool, you're not evaluating a single integration point but a potential hub connecting disparate systems. Apply the same scrutiny you'd give to a service mesh or API gateway: authentication at every hop, least-privilege access, and comprehensive logging of cross-system calls.
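To make that concrete, here is a minimal sketch of the pattern, assuming a hypothetical dispatch layer sitting in front of your backends. The agent identity, system names, and AGENT_PERMISSIONS map are illustrative, not part of MCP itself:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

# Hypothetical per-agent allowlist: which (system, operation) pairs each
# agent identity may invoke. Anything unlisted is denied by default.
AGENT_PERMISSIONS = {
    "support-agent": {("crm_db", "read"), ("ticket_api", "create")},
}

def dispatch_tool_call(agent_id: str, system: str, operation: str) -> None:
    """Authorize and log one cross-system call before forwarding it."""
    allowed = (system, operation) in AGENT_PERMISSIONS.get(agent_id, set())
    log.info("agent=%s system=%s op=%s allowed=%s ts=%s",
             agent_id, system, operation, allowed,
             datetime.now(timezone.utc).isoformat())
    if not allowed:
        raise PermissionError(f"{agent_id} may not {operation} on {system}")
    # Forward the call to the real backend here, over an authenticated channel.
```

The point is structural: every hop passes through a choke point that authenticates, authorizes against least privilege, and logs the decision, whether or not the call was agent-initiated.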

Start by inventorying which systems an agentic AI tool would need to access in your environment. Then map those connections against your existing segmentation strategy. If your network relies on perimeter-based controls, you have work to do.

Myth 2: "Open-source AI foundations mean we can trust the code by default"

Reality: Open source gives you visibility, not immunity.

The Linux Foundation's adoption of MCP, Goose, and AGENTS.md under the AAIF provides transparency into how these protocols work. That's valuable—but it doesn't eliminate your responsibility to verify what you're running.

Consider how you handle other open-source dependencies. You likely scan them for known vulnerabilities, review their supply chain, and monitor for unexpected behavior. The same discipline applies here, with an added layer: AI agents can exhibit emergent behavior that static code analysis won't catch.

Your verification process should include:

  • Dependency scanning for the agent runtime and all MCP server implementations you deploy
  • Review of what data sources each agent is configured to access
  • Testing of agent behavior under adversarial inputs (yes, you need to red-team your AI agents)
  • Monitoring for unexpected API calls or data access patterns in production

The open-source nature of these projects means you can audit the protocol implementation. Use that advantage—don't just assume someone else has.
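For the configuration-review bullet above, even a small script helps. Here is a sketch, assuming a Claude Desktop-style client config with an "mcpServers" map; the file name, config key, and APPROVED_SERVERS list are assumptions to adapt to whichever client your teams actually run:

```python
import json
from pathlib import Path

# Hypothetical internal allowlist of MCP servers that have passed review.
APPROVED_SERVERS = {"docs-search", "dev-env"}

def audit_mcp_config(path: str) -> list[str]:
    """Flag configured MCP servers that have not been security-reviewed."""
    config = json.loads(Path(path).read_text())
    configured = set(config.get("mcpServers", {}))
    return sorted(configured - APPROVED_SERVERS)

if __name__ == "__main__":
    for name in audit_mcp_config("claude_desktop_config.json"):
        print(f"UNREVIEWED MCP SERVER: {name}")
```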

Myth 3: "We can apply our existing API security controls and call it done"

Reality: Agentic AI introduces stateful, multi-step workflows that break traditional API security assumptions.

Your API gateway probably enforces rate limits, validates input schemas, and logs requests. That's necessary but insufficient for agentic systems.

Traditional APIs are stateless and predictable: client sends request, server sends response, transaction ends. Agentic AI systems maintain context across multiple interactions, make decisions based on accumulated state, and chain together operations you never explicitly authorized as a sequence.

This matters for compliance. PCI DSS v4.0.1 Requirement 6.2.4 mandates software engineering techniques to prevent or mitigate common software attacks, including injection. But when an AI agent constructs database queries based on natural language interpretation of previous context, where exactly do you implement that validation? The initial user prompt? Each intermediate step? The final query construction?
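One workable answer: validate at every layer you control, but put the hard guarantee at the final query construction. A minimal sketch follows; the customers table, ID format, and lookup_customer function are hypothetical. However the agent derived the value, the tool boundary rejects malformed input and parameterizes the query, so agent output is never concatenated into SQL:

```python
import re
import sqlite3

CUSTOMER_ID = re.compile(r"^[A-Z]{3}-\d{6}$")  # hypothetical ID format

def lookup_customer(conn: sqlite3.Connection, customer_id: str):
    """Last line of defense: validate and parameterize at the tool boundary."""
    if not CUSTOMER_ID.match(customer_id):
        raise ValueError(f"rejected malformed customer id: {customer_id!r}")
    # Parameterized query: the driver binds the value. The agent never
    # builds SQL strings, no matter what context led it here.
    return conn.execute(
        "SELECT id, name, tier FROM customers WHERE id = ?",
        (customer_id,),
    ).fetchone()
```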

You need new controls:

  • Workflow-level authorization: define what sequences of actions are permitted, not just individual API calls (see the sketch after this list)
  • Context inspection: log and review the decision chain that led to each action
  • Breakpoints for high-risk operations: require human approval before agents execute privileged commands, regardless of how they arrived at that decision
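Here is a sketch of the first and third controls together; all workflow and action names are invented for illustration:

```python
from typing import Callable

# Hypothetical allowlist of permitted action sequences, plus actions that
# always pause for a human regardless of how the agent reached them.
PERMITTED_WORKFLOWS = {
    ("read_ticket", "query_kb", "draft_reply"),
    ("read_ticket", "run_diagnostics", "restart_service"),
}
REQUIRES_APPROVAL = {"restart_service"}

def authorize_workflow(actions: list[str],
                       approve: Callable[[str], bool]) -> None:
    """Authorize the sequence as a whole, then gate privileged steps."""
    if tuple(actions) not in PERMITTED_WORKFLOWS:
        raise PermissionError(f"action sequence not permitted: {actions}")
    for action in actions:
        if action in REQUIRES_APPROVAL and not approve(action):
            raise PermissionError(f"human approval withheld for: {action}")
```

In production you would evaluate each step as the agent proposes it rather than after the fact, failing closed the moment the accumulated sequence leaves the allowlist.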

Myth 4: "Leadership changes at foundations don't impact our day-to-day security work"

Reality: Strategic direction shapes which security features get prioritized in the tools you'll use.

Jim Zemlin's recognition that he couldn't helm both the Linux Foundation and the AAIF indefinitely wasn't just organizational housekeeping. It signaled that agentic AI requires dedicated strategic oversight—which means focused resource allocation and prioritization.

When Mazin Gilbert took on executive director responsibilities for the AAIF, that leadership brought specific technical priorities and industry relationships. Those priorities influence which security features get built into MCP implementations, which use cases get reference architectures, and which compliance frameworks get addressed first.

For your team, this means: engage early. The Linux Foundation model includes working groups and technical steering committees. If you wait until agentic AI tools are mature and widely deployed, you'll be retrofitting security controls instead of building them in.

Participate in the security working groups forming around these projects. Document your requirements—especially around audit logging, access control, and compliance mapping. The time to influence these protocols is now, while they're still establishing patterns.

Myth 5: "We should wait until agentic AI tools are 'production-ready' before planning for them"

Reality: Your developers are already experimenting with MCP-enabled tools, and your compliance scope is expanding whether you're ready or not.

If your organization does any software development, someone on your team has probably connected an AI coding assistant to your repositories. Many of these tools now support MCP, which means they can access your documentation, query your APIs, and interact with your development infrastructure.

This isn't theoretical. Your SOC 2 Type II scope now includes any system that processes customer data—and if an AI agent can read your customer database to answer support questions, that agent is in scope. Your auditor will ask how you control access, log activity, and ensure data retention policies apply.

The production-ready threshold is a moving target. Start building your governance framework now:

  • Maintain an inventory of AI tools in use across your organization (shadow AI is the new shadow IT); a minimal record schema is sketched after this list
  • Extend your data classification scheme to cover AI agent access
  • Update your incident response playbook to include "AI agent behaving unexpectedly" scenarios
  • Define clear policies for what systems agents can and cannot access
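A minimal shape for that inventory, sketched as a dataclass; every field name and value here is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AgentToolRecord:
    """One inventory entry per AI tool in use (hypothetical schema)."""
    name: str                       # e.g. an IDE coding assistant
    owner: str                      # accountable team or person
    mcp_servers: list[str] = field(default_factory=list)   # servers it connects to
    data_classes: list[str] = field(default_factory=list)  # classifications it can read
    approved: bool = False          # passed security review?

inventory = [
    AgentToolRecord(name="support-summarizer", owner="support-eng",
                    mcp_servers=["ticket_api", "crm_db"],
                    data_classes=["customer-pii"]),
]
# Anything unapproved that touches sensitive data is your first review target.
flagged = [t for t in inventory
           if not t.approved and "customer-pii" in t.data_classes]
```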

What to do instead

Stop treating agentic AI as a future problem. It's a current architecture decision.

Start with a pilot: choose one low-risk workflow where an MCP-enabled agent could provide value—maybe internal documentation search or development environment setup. Instrument it thoroughly. Log every data access, every API call, every decision point. Run it for a month and review what you learn.
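Instrumentation can be as simple as structured, append-only events you can query at the end of the month. A sketch, assuming hypothetical event types and a local JSONL file:

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("agent-pilot-audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("agent_pilot_audit.jsonl"))

def record(event_type: str, **fields) -> None:
    """Append one structured event: a data access, API call, or decision."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **fields,
    }))

record("data_access", agent="docs-search", source="wiki", page="ops/runbook")
record("decision", agent="docs-search", action="summarize",
       alternatives=["escalate_to_human"])
```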

Build your threat model explicitly: what happens if an agent is compromised? What if it misinterprets instructions? What if it chains together authorized actions in an unauthorized sequence? Write these scenarios down and design controls for each.

Update your compliance documentation now, before your auditor asks. Map how agentic AI tools fit into your existing control framework. Identify gaps. If you're subject to PCI DSS, understand how agent activity relates to Requirement 10.2.1.2 (logging of all actions taken with administrative access). If you're pursuing ISO 27001 certification, consider how agents affect your information security risk assessment.

Most importantly: establish governance before deployment scales. Define who approves new agent capabilities, how you test them, and what triggers a security review. The Linux Foundation's adoption of MCP signals that agentic AI is moving from research project to infrastructure component. Your security program needs to move with it.
