The Enterprise IT Security Guide to Claude and MCP
Eight out of ten Fortune 10 companies now use Claude. Over 300,000 businesses run it in production. And the Stacklok State of MCP in Software 2026 report found that 50% of organizations are actively experimenting with MCP servers, yet only 11% have reached production. That 39-point gap is not a technology problem. It is a governance problem.
IT and security teams are being asked to approve Claude and MCP deployments that developers have already started building, against a backdrop of documented vulnerabilities: 48% of MCP servers use insecure credential storage according to Trend Micro research, 53% rely on long-lived static credentials per Astrix, and two CVEs in the past twelve months demonstrated that cloning an untrusted repository with Claude Code is sufficient to trigger API key exfiltration before a trust dialog appears. Meanwhile, the Center for Internet Security published its MCP Companion Guide in April 2026, applying CIS Controls v8.1 to MCP-based systems and formally recognizing MCP as a new and distinct security boundary requiring policy, oversight, and operational discipline.
This guide is written for IT administrators and security professionals who are responsible for evaluating, approving, or governing Claude and MCP in their organizations. It covers the six control domains that determine whether a Claude and MCP deployment is enterprise-ready: secrets management, identity and access control, network isolation, approved server governance, audit logging, and policy enforcement. Each section answers the questions IT security teams are actually asking and maps to the controls that enterprise governance frameworks require.
How to Think About the Claude and MCP Attack Surface
Before applying controls, IT teams need an accurate mental model of what they are governing. Claude and MCP create a different threat surface than previous SaaS AI tools, for a specific reason: MCP servers are action-capable, not just context-aware.
A traditional SaaS AI integration reads data and produces text. An MCP-enabled Claude deployment can authenticate as a user, query a production database, create and merge pull requests, send emails, modify Confluence pages, and invoke cloud APIs, all autonomously, as part of a single agent workflow. The attack surface is not the AI model. It is every system the MCP servers are permitted to reach, with the permissions of whatever credentials those servers hold.
The CIS MCP Companion Guide characterizes this precisely: MCP expands identity, access control, logging, and application security surfaces by formalizing how AI systems discover and invoke privileged capabilities. IT security teams that treat Claude as a productivity tool with a chat interface are governing the wrong thing. The correct model treats each MCP server as a new privileged software component that can authenticate to, read from, and write to the systems it connects to.
There are four components to govern in a Claude and MCP deployment:
Claude itself — the AI model, accessed via Claude.ai (SaaS), Claude Enterprise (Anthropic-managed), or the Claude API (your infrastructure). Each surface has different data handling properties, different controls available to IT administrators, and a different attack surface.
MCP servers — software processes that connect Claude to specific external systems (databases, APIs, SaaS tools, internal services). Each MCP server holds credentials for the downstream system it connects to and can invoke tools against that system on behalf of the agent.
Agent Skills — structured Markdown files that encode procedural knowledge for Claude. Skills are a supply chain risk: an organization that allows users to import Skills from external sources has introduced an unvetted software dependency with potential for prompt injection and malicious instruction execution.
The MCP governance layer — the platform-level controls that determine which servers are approved, who can access which tools, how credentials are managed, and what is logged. In a naive deployment, this layer does not exist. In a production enterprise deployment, it must.
Domain 1: Secrets Management
The problem
Credentials are the highest-value target in any Claude and MCP deployment. An MCP server that connects Claude to Salesforce, GitHub, a production database, or an internal API necessarily holds authentication credentials for that system. The default pattern across most MCP documentation is to store those credentials in a JSON configuration file on the developer’s machine or in environment variables in the server’s process environment.
Trend Micro’s research found that approximately 48% of MCP servers recommend insecure storage methods, such as plaintext JSON config files, .env files, and hardcoded values in source code. GitGuardian found that over 12.8 million secrets were exposed in public GitHub repositories in a single year, a 28% year-over-year increase. The majority were API keys and cloud access credentials. When developers follow the examples in MCP documentation, the default outcome is credentials in locations that end up committed to version control.
What IT should require
No credentials in config files, source code, or environment variable files committed to version control. The Claude Desktop configuration file at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, and .mcp.json project files, are common locations where credentials appear. IT policy should require secret scanning (Gitleaks, truffleHog, or equivalent) as a CI/CD gate, and should explicitly prohibit credentials in these files.
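As a lightweight complement to a dedicated scanner, the gate can be approximated in a few lines. This is an illustrative sketch, not a replacement for Gitleaks' full ruleset; the patterns below are examples only:

```python
import re

# Illustrative patterns only -- a production gate should use Gitleaks' ruleset.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),   # Anthropic-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),         # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),            # AWS access key IDs
    re.compile(r'"(?:api_key|token|password)"\s*:\s*"[^"]+"'),  # JSON config values
]

def find_secrets(text: str) -> list[str]:
    """Return every pattern that matches somewhere in the file content."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

def scan_files(paths):
    """Yield (path, matched_patterns) for files that trip the gate."""
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            hits = find_secrets(f.read())
        if hits:
            yield path, hits
```

Wired into a pre-commit hook or CI stage, any yielded result fails the build before the credential reaches version control.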
Centralized secrets store for all MCP server credentials. Enterprise credentials for systems that MCP servers connect to (AWS keys, GitHub tokens, database passwords, Salesforce credentials, SaaS API keys) should be stored in HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or an equivalent platform-managed secrets store. Credentials should be retrieved at runtime, not loaded at startup and cached in process memory.
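The difference between startup-time caching and runtime retrieval can be sketched as follows. `SecretsBackend` is a hypothetical stand-in for Vault, AWS Secrets Manager, or Key Vault, not a real client library:

```python
from typing import Protocol

class SecretsBackend(Protocol):
    """Hypothetical interface standing in for Vault / Secrets Manager / Key Vault."""
    def get(self, name: str) -> str: ...

class AntiPatternServer:
    """Loads the credential once at startup and caches it in process memory.
    A rotation is invisible until the process restarts."""
    def __init__(self, backend: SecretsBackend):
        self._token = backend.get("github-token")  # cached for process lifetime

    def call_downstream(self) -> str:
        return self._token

class RecommendedServer:
    """Fetches the credential per request, so rotation takes effect immediately."""
    def __init__(self, backend: SecretsBackend):
        self._backend = backend

    def call_downstream(self) -> str:
        return self._backend.get("github-token")  # retrieved at runtime
```

In practice a short TTL cache is a reasonable middle ground, but the governance property to verify is the same: a rotated credential must take effect without restarting every MCP server.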
Automated rotation on a documented schedule. Astrix found that approximately 53% of MCP server deployments rely on long-lived static credentials. Long-lived credentials that are not rotated persist as a risk indefinitely after a compromise. IT policy should require automated rotation on a 60–90 day schedule for API keys and shorter schedules for database credentials, with documented emergency rotation procedures for suspected compromises.
The Anthropic API key itself requires governance. The key that allows Claude to run (the ANTHROPIC_API_KEY used by developers and in Claude Code deployments) is a high-value credential. Anthropic’s own guidance recommends 90-day rotation, spend limits per key to contain blast radius from leaked keys, and integration with GitHub’s secret scanning partner program, which automatically notifies Anthropic when a Claude API key is detected in a public repository. IT should treat the Anthropic API key with the same governance applied to any cloud provider credential.
Does a specific MCP server need permission to access credentials?
One of the most common questions IT teams receive is: “Does [specific MCP server] need permission to access our credentials?” The answer depends on what the server connects to. Notion MCP, for example, requires user-based OAuth authentication; the user authorizes Notion to share their workspace data with the MCP client through a standard OAuth consent flow. This is documented behavior. The Notion MCP server does not hold a static Notion API key; it holds an OAuth token scoped to what the user authorized. Notion’s Enterprise plan provides MCP Governance controls that allow administrators to restrict which AI tools can connect to the workspace at all.
The governance question for every MCP server is the same: What credentials does it hold? What permissions do those credentials carry? Who authorized that access and when? Is there a revocation process? A server that requires OAuth authorization and implements per-user scoped tokens is meaningfully more governable than one that uses a shared static API key. IT should document the credential model for every approved MCP server.
What ToolHive does: ToolHive’s Kubernetes Operator manages credentials for MCP servers centrally. Downstream credentials are stored in Kubernetes Secrets or an external secrets operator, not in developer configuration files or process environment variables. The embedded authorization server handles OAuth flows in-process against your enterprise IdP. Developers never see or store MCP server credentials; they authenticate through their IdP and receive scoped tokens. Credential rotation happens at the platform level without requiring updates to individual developer machines. (docs.stacklok.com)
Domain 2: Identity and Access Control
The problem
In a default Claude and MCP deployment, there is no per-user identity on tool invocations. Claude connects to an MCP server using a shared service account or API key. Every developer’s agent uses the same credential. The audit trail records “the GitHub MCP server was called”, not which developer’s agent called it, on whose behalf, to accomplish what goal.
This model fails the most basic requirements of enterprise identity governance: least-privilege access, per-user attribution, and revocability at the individual level. When a developer leaves the organization, IT cannot revoke that developer’s access to MCP-connected systems without rotating the shared credential, which affects every other developer simultaneously.
What IT should require
Single sign-on integration for all MCP access. Every developer who uses Claude with MCP servers should authenticate through the organization’s IdP (Okta, Entra ID, Google Workspace). MCP access should be tied to their organizational identity, not a personal API key or shared service account. When a developer’s account is offboarded from the IdP, their MCP access is revoked automatically.
Role-based access control at the tool level, not just the server level. Server-level RBAC (which only determines whether a user can access a server at all) is too coarse for enterprise governance. A developer who needs read access to a database should not receive write access simply because the database MCP server exposes both capabilities. IT should require that MCP platforms support tool-level RBAC, and that roles are defined and enforced by the platform, not by convention.
Per-request token validation, not session-level. An authenticated session that persists indefinitely is a stolen-token risk. Per-request token validation means every tool invocation validates the token’s signature, issuer, audience, expiry, and required scope. A revoked credential fails the next tool invocation, not the next session establishment.
Scoped permissions aligned with job function. A developer accessing GitHub via MCP for code review should have a different permission scope than an infrastructure engineer accessing Kubernetes via MCP for cluster management. Claude Code and Cursor deployments used by different teams should connect to role-scoped MCP configurations, not a single shared gateway with universal tool access.
What Stacklok does: Stacklok’s embedded authorization server runs in-process within the Stacklok proxy, handling the full OAuth 2.1 flow against Okta, Entra ID, or Google. Every tool invocation carries a verified, per-user identity, not a shared service account token. The Kubernetes Operator auto-provisions namespace-scoped RBAC resources for each MCP server. The vMCP Virtual MCP Server lets platform teams define role-scoped tool sets: a developer role sees only the tools relevant to their workflow; no individual server grants all tools to all users. (docs.stacklok.com/toolhive/updates/2026/02/16/updates)
Domain 3: Network Isolation
The problem
MCP servers run as processes that can make network connections to downstream systems. In a naive deployment, there are no network-level controls on what systems an MCP server can reach beyond whatever the developer’s machine can reach. A compromised MCP server (through prompt injection, a supply chain attack, or a code vulnerability) can reach any system accessible on the network from its execution environment.
The “NeighborJack” vulnerability class, documented in June 2025, found hundreds of MCP servers bound to 0.0.0.0 by default. These servers responded to initialization handshakes from any device on the same network, then functioned as open proxies to every downstream system their tools were permitted to reach. An attacker on the same corporate WiFi network as a developer could connect to an exposed MCP server and invoke any tool it exposed.
What IT should require
MCP servers should never bind to 0.0.0.0 in production. Servers accessible only to local processes should bind to 127.0.0.1. Servers meant to serve a team should be accessible only through an authenticated gateway, not directly exposed on the network.
Egress allowlisting per server. Each MCP server should be permitted outbound network access only to the specific backend systems its tools require. A GitHub MCP server needs outbound HTTPS to api.github.com and nothing else. A database MCP server needs outbound TCP to one specific database host on one specific port. Unrestricted egress from an MCP server process means a compromise becomes a pivot point into the internal network.
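Conceptually the policy is a per-server map of permitted destinations. In Kubernetes this is expressed as a NetworkPolicy per server rather than application code; the sketch below (server names and hosts are examples) shows the same default-deny check:

```python
# Example per-server egress allowlists. Default is deny: a server not listed,
# or a destination not listed for it, is blocked.
EGRESS_ALLOWLIST = {
    "github-mcp":   {("api.github.com", 443)},
    "postgres-mcp": {("db-prod.internal", 5432)},
}

def egress_permitted(server: str, host: str, port: int) -> bool:
    """True only if this server is explicitly allowed to reach host:port."""
    return (host, port) in EGRESS_ALLOWLIST.get(server, set())
```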
TLS for all remote MCP connections. All connections between MCP clients (Claude Code, Cursor, VS Code) and remote MCP servers must use TLS with certificates from a recognized certificate authority. Self-signed certificates are not acceptable in production. CVE-2025-6514 (CVSS 9.6) was partially exploitable because clients accepted unverified connections from MCP servers.
Process isolation between servers. Multiple MCP servers sharing a single host process means a vulnerability in one server’s dependencies can reach another server’s credentials and network access. Container isolation (each server in its own container with a separate network namespace and filesystem) is the correct runtime boundary for enterprise deployments.
What Stacklok does: Stacklok runs every MCP server in an isolated container with configurable network access and filesystem permissions defined in JSON permission profiles. On Kubernetes, the Operator deploys Kubernetes NetworkPolicy resources that allowlist specific egress destinations per server. All inbound traffic passes through the ToolHive proxy, which enforces authentication before forwarding requests to any server. No MCP server is directly accessible without transiting the authenticated gateway. (docs.stacklok.com/toolhive/guides-k8s)
Domain 4: Approved Server Registry and Supply Chain Governance
The problem
The MCP ecosystem has over 10,000 published servers as of April 2026. Most are community-developed, lightly reviewed, and distributed through npm and PyPI; these are the same package registries that have been targeted by supply chain attacks against AI infrastructure. In December 2025, Clutch Security found that approximately 38% of MCP servers in production come from unofficial sources, and 3% contained hardcoded credentials that act as credential-theft traps for developers who swap in their own production keys.
Without a centralized approved server registry, individual developers are making independent decisions about which MCP servers to install and trust, often without security review. The Stacklok State of MCP report found that in a typical 10,000-person organization, 15.28% of employees are running an average of two MCP servers each. Most of those installations received no security review.
MCP-connected Agent Skills present a parallel supply chain risk. Snyk’s ToxicSkills audit in February 2026 found 1,467 malicious payloads across 3,984 scanned skills, a 36% flaw rate, with 76 confirmed malicious skills carrying active payloads. An organization that permits users to import arbitrary Skills from external sources has opened a supply chain attack surface with no visibility into what is being installed.
What IT should require
A centralized, admin-curated registry of approved MCP servers. IT should maintain an inventory of all approved MCP servers, including name, version, source repository, upstream vendor, downstream systems accessed, credential model, last security review date, and production deployment status. No server should reach production without appearing in this registry.
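The registry can be as simple as a typed record plus a deployment gate. A sketch with illustrative field names and values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedServer:
    """One row in the admin-curated registry (fields per the inventory above)."""
    name: str
    version: str
    source_repo: str
    downstream_systems: list[str]
    credential_model: str          # e.g. "per-user OAuth" or "static API key"
    last_review: date
    production_approved: bool

def deployment_allowed(registry: dict[str, ApprovedServer],
                       name: str, version: str) -> bool:
    """Reject anything not in the registry, version-mismatched, or unapproved."""
    entry = registry.get(name)
    return (entry is not None
            and entry.version == version
            and entry.production_approved)
```

The version check matters: approving "the GitHub server" rather than a specific reviewed version reopens the supply chain gap the registry exists to close.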
A documented approval workflow for new servers. New MCP server requests should require a security review that evaluates: the source repository and maintenance status, the downstream systems the server accesses and the permissions it requires, the credential model, and a software composition analysis (SCA) scan of the server’s dependencies. Review at the catalog level eliminates the need for per-deployment review and is the only scalable governance model.
Image signature verification and provenance attestation. MCP server container images should be cryptographically signed and should carry SLSA provenance attestations recording which source repository, commit, and build pipeline produced the image. Deployment pipelines should reject unsigned images. This is the only mechanism that detects post-publication tampering: a server image that was reviewed and signed but subsequently modified in the registry.
Restricted Agent Skills governance. IT should require that Agent Skills used in organizational Claude deployments come from an internal, reviewed Skills library or from an approved external source, not from unrestricted import. The Claude for Work admin console allows administrators to manage which Skills are available to users.
What Stacklok does: Stacklok’s Registry Server implements the official MCP Registry API and provides a curated catalog of approved servers. Administrators define which servers are available; developers discover and deploy from the portal. Servers outside the curated catalog are inaccessible to production workloads. Stacklok verifies image signatures and SLSA provenance attestations before deployment. Stacklok was founded by members of the team behind Kubernetes, with supply chain security as a first-class engineering concern: the same discipline that produced the Kubernetes supply chain security ecosystem, now applied to MCP servers. Stacklok’s open source project, ToolHive, is Apache 2.0 licensed and auditable at github.com/stacklok/toolhive. (docs.stacklok.com)
Domain 5: Audit Logging
The problem
In a default Claude and MCP deployment, there is no meaningful audit trail of what Claude did, which MCP tools it invoked, what data it accessed, or on whose behalf it acted. The MCP specification does not define a standard for audit logging; the 2026 MCP roadmap identifies structured observability as a pre-RFC priority, acknowledging that enterprises are currently building their own logging implementations. Claude for Work provides OpenTelemetry log streaming for Claude.ai usage, with the critical caveat that prompts, MCP server names, and skill names are excluded from logs by default, limiting forensic utility.
Without an audit trail, IT security teams cannot answer the questions that compliance frameworks and incident response require: What did the agent do? Which systems were accessed? What data was read or modified? Which user authorized it? When exactly did it happen?
What IT should require
Every tool invocation must produce a structured log entry. The minimum viable entry contains: authenticated principal identity (not just server identity), tool name, sanitized input parameters (credentials stripped), result status, timestamp, and a trace ID correlating the invocation to a parent workflow. Generic “agent activity” logs that record invocations without user attribution are not an audit trail.
Logs must be forwarded in real time to a SIEM you control. Logs stored only on the MCP platform’s own infrastructure can be deleted or altered by an attacker who achieves platform access. Real-time forwarding to Splunk, Elastic, Datadog, or a similar SIEM you control removes this risk. For organizations subject to data retention requirements, the retention policy must be defined and enforced at the SIEM level, not at the MCP platform level.
OTel MCP semantic convention alignment for trace correlation. The OpenTelemetry MCP semantic conventions, merged into the OTel specification in January 2026, define standard attribute names for MCP tool invocations. Platforms that emit telemetry using these attributes produce spans that integrate with your existing observability stack without custom parsers. Non-standard attribute formats produce siloed log streams that cannot be correlated across multi-agent workflows.
Alert on anomalous tool invocation patterns. Audit logs are not useful post-incident if they were never monitored in real time. IT should configure alerting on: repeated authentication failures, tool invocations outside normal business hours or volume, tool invocations for systems the authenticated user has no documented business need to access, and outbound network connections from MCP server containers to unapproved destinations.
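Two of these rules, sketched with illustrative thresholds and event shapes:

```python
from collections import Counter
from datetime import datetime

def off_hours(ts: datetime, start_hour: int = 8, end_hour: int = 19) -> bool:
    """Flag invocations outside the configured business-hours window."""
    return not (start_hour <= ts.hour < end_hour)

def repeated_auth_failures(events: list[dict], threshold: int = 5) -> set[str]:
    """Return principals with at least `threshold` auth failures in the batch."""
    counts = Counter(e["principal"] for e in events
                     if e.get("status") == "auth_failed")
    return {p for p, n in counts.items() if n >= threshold}
```

Most SIEMs express these as query-language alert rules rather than code, but the inputs are the same structured audit fields described above.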
What Stacklok does: Stacklok’s telemetry aligns with OTel MCP semantic conventions as of March 2026. Every tool invocation is traced with authenticated principal identity, standard OTel attributes, and trace IDs compatible with Grafana, Datadog, Honeycomb, Splunk, and New Relic. Logs are forwarded through standard OTel and Prometheus pipelines to whatever SIEM the organization operates. The audit trail records not “the GitHub MCP server was called” but “Alice, authenticated via Okta, invoked the create_pull_request tool with these parameters.” (docs.stacklok.com/toolhive/updates/2026/03/09/updates)
The Enterprise IT Decision Framework
Use this framework to evaluate the current state of Claude and MCP in your organization and prioritize governance investments.
| Stage | Indicator | Priority Controls |
|---|---|---|
| Shadow deployment | Developers using Claude with MCP servers without IT involvement | Inventory all MCP servers; establish approval workflow; block unapproved servers via network policy or MDM |
| Pilot without governance | Approved pilot with per-developer credential management and no audit trail | Centralize credentials to a secrets store; implement SSO for MCP access; establish audit log forwarding |
| Team deployment without RBAC | Production deployment with shared credentials and server-level access control only | Implement per-user identity via IdP integration; deploy tool-level RBAC via vMCP or equivalent; define role-scoped server catalogs |
| Production with governance gaps | Audit logging exists but is not per-user attributed; supply chain controls absent | Add principal identity to audit logs; implement image signature verification; deploy curated server registry |
| Enterprise-ready | Per-user identity, tool-level RBAC, real-time SIEM forwarding, curated signed registry, centralized credentials, MDM-managed Claude Code policy | Ongoing: quarterly permission audits, automated rotation, anomaly alerting, CIS Controls v8.1 compliance verification |
Frequently Asked Questions
Need to connect Claude to MCP servers securely? Here are answers to the questions IT teams ask most often:
How does Stacklok help govern Claude and MCP?
Stacklok is a Kubernetes-native, open-source (Apache 2.0) MCP governance platform that addresses the six domains in this guide at the infrastructure level. Platform teams deploy Stacklok once; every developer’s Claude connects to MCP servers through the Stacklok gateway. Credentials are centralized. Per-user identity flows from the enterprise IdP through the embedded authorization server to every tool invocation. Tool-level RBAC via vMCP enforces role-scoped access. OTel-aligned telemetry is forwarded to your existing SIEM. The curated Registry Server with image signature verification blocks unapproved servers. All runtime components run inside your Kubernetes cluster — no data transits Stacklok’s infrastructure. For enterprise teams ready to move from shadow deployment to governed production, the fastest path is a Stacklok deployment: docs.stacklok.com/toolhive/guides-k8s.
Which security frameworks apply to enterprise MCP deployments?
The Center for Internet Security published the MCP Companion Guide in April 2026, applying CIS Controls v8.1 to MCP-based systems. This is currently the most authoritative framework for enterprise MCP security governance. The guide identifies MCP as expanding identity, access control, logging, and application security surfaces, and maps specific CIS Controls to MCP deployment requirements. OWASP’s MCP Top 10, currently in beta as of April 2026, addresses the vulnerability taxonomy specific to MCP server implementations. Organizations subject to SOC 2, ISO 27001, or NIST CSF should map the controls in those frameworks to the six domains covered in this guide.
Can we limit what data Claude can access?
Yes, but it requires affirmative configuration rather than the default setup. The controls that limit Claude’s data access are: (1) scoped MCP server permissions: only connect MCP servers to systems that specific workflows need, not to all available systems; (2) tool-level RBAC via vMCP or equivalent: restrict tools to read-only operations unless writes are explicitly needed; (3) data classification controls: do not connect MCP servers to systems containing regulated data (PII, PHI, financial records) without a documented data handling assessment; (4) prompt content controls: Claude for Work’s admin console allows administrators to configure data handling policies for prompts sent to Anthropic. The default configuration connects Claude to maximum available context, which is appropriate for individual productivity but not for compliance-sensitive enterprise workflows.
How should API keys and OAuth tokens for MCP servers be stored?
API keys and OAuth tokens used by MCP servers should be stored in a dedicated secrets store, such as HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or equivalent, not in developer configuration files, environment variable files, or source code. The Anthropic API key itself should be governed like any cloud provider credential: rotation on a 90-day schedule, per-project scoping (not one key shared across the organization), and integration with spend limits to contain blast radius from key compromise. For enterprise teams using Stacklok, MCP server credentials are managed at the platform level, so developers never see or store downstream system credentials, and rotation happens centrally without updating individual developer machines.
April 29, 2026