The MCP Security Checklist: What to Verify Before You Ship an MCP Server to Production
The security posture of most MCP servers in production today is poor, not because the developers who built them were careless, but because the protocol’s rapid adoption outpaced the availability of structured guidance. The MCP specification defines how agents discover and invoke tools. It does not enforce authentication, validate inputs, or prevent a server from exposing every downstream system it can reach to any client that connects.
The numbers make this concrete. Among 2,614 MCP implementations surveyed by security researchers in early 2026, 82% use file operations that are vulnerable to path traversal attacks, and more than a third are susceptible to command injection.
This checklist covers the controls that must be verified before an MCP server handles production traffic. It is organized into seven domains: authentication, authorization and tool permissions, input validation and injection defense, secrets management, supply chain security, audit logging and observability, and network hardening. Each item includes what to verify, how to test it, and what a failure looks like in practice.
The OWASP MCP Top 10 and the OWASP GenAI Security Project’s “Practical Guide for Secure MCP Server Development” (February 2026) provide the framework behind this checklist. Where the MCP specification itself makes normative security requirements, those are cited directly.
Domain 1: Authentication
Authentication is the first and most commonly absent control. As of early 2026, only 8.5% of MCP servers in the ecosystem use OAuth. The remaining 91.5% rely on static API keys, shared tokens, or no authentication at all. Static API keys are difficult to rotate, impossible to scope per-request, and provide no identity signal for audit logs. They are not a substitute for a properly implemented OAuth flow.
1.1 — Require OAuth 2.1 with PKCE for all remote MCP servers
What to verify: Every remote MCP server requires authentication through a properly configured OAuth 2.1 authorization server. The March 2025 MCP specification update made OAuth 2.1 mandatory for remote servers. Anonymous connections must be rejected at the transport layer.
How to test: Attempt to connect without an Authorization header. Attempt to connect with an expired token. Attempt to connect with a token issued for a different audience. All three should result in 401 Unauthorized with no tool access.
Failure mode: Development endpoints left accessible without authentication. Fallback paths that accept API keys when OAuth fails. Token validation only at session establishment rather than per-request. An exposed server that returns a valid initialization handshake to an unauthenticated client is, by definition, an open proxy to its downstream systems.
Why PKCE matters: Proof Key for Code Exchange (PKCE) protects against authorization code interception attacks. The MCP specification requires PKCE for all clients. Without it, an attacker who can observe the authorization redirect can steal the authorization code and exchange it for a valid access token.
1.2 — Enforce per-request token validation
What to verify: Every tool invocation validates the token’s signature, issuer (iss), audience (aud), expiry (exp), and required scope. Access tokens must expire within minutes, not hours.
How to test: Establish an authenticated session. Manually expire or revoke the token server-side. Attempt a tool invocation with the now-invalid token. The invocation should fail with 401, not succeed because the session was previously established.
Failure mode: Session-level authentication with no per-request revalidation. Long-lived tokens (hours or days) that cannot be quickly revoked if a client is compromised.
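A per-request validation sketch, using only the standard library and illustrative issuer/audience values. Signature verification is deliberately omitted here; in production it must be performed first, with a JWT library against the issuer's published keys.

```python
import base64
import json
import time

# Illustrative values; real deployments load these from configuration.
EXPECTED_ISSUER = "https://auth.example.com"
EXPECTED_AUDIENCE = "https://mcp.example.com"

def _decode_segment(segment):
    # JWT segments are base64url without padding; restore it before decoding.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def validate_claims(token, required_scope, now=None):
    """Check iss, aud, exp, and scope on every tool invocation.

    A sketch only: verify the token signature before trusting any claim.
    """
    now = time.time() if now is None else now
    try:
        claims = _decode_segment(token.split(".")[1])
    except Exception:
        return False
    if claims.get("iss") != EXPECTED_ISSUER:
        return False
    if claims.get("aud") != EXPECTED_AUDIENCE:
        return False
    if not isinstance(claims.get("exp"), (int, float)) or claims["exp"] <= now:
        return False
    return required_scope in claims.get("scope", "").split()
```

Because the check runs per invocation, revoking or expiring a token takes effect on the very next tool call, not at the next session establishment.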
1.3 — Implement redirect URI whitelisting
What to verify: The OAuth authorization server maintains a strict allowlist of permitted redirect URIs. Any redirect URI not on the allowlist causes the authorization request to fail immediately, before the user is shown a consent screen.
How to test: Initiate an OAuth flow with a redirect URI that differs by one character from a registered URI (e.g., evil.example.com vs. example.com). The flow should reject the request, not redirect the authorization code to the attacker-controlled URI.
Failure mode: Open redirect vulnerabilities that allow attackers to register MCP clients with malicious redirect URIs, intercept authorization codes during the OAuth flow, and exchange them for valid access tokens.
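A minimal allowlist check, with hypothetical client and URI values. The important property is exact string comparison, never prefix, substring, or wildcard matching; pattern-based matching is how one-character lookalike domains slip through.

```python
# Illustrative registry; real deployments load registered URIs per client
# from the authorization server's client database.
REGISTERED_REDIRECT_URIS = {
    "client-abc": {"https://app.example.com/oauth/callback"},
}

def redirect_uri_allowed(client_id, redirect_uri):
    """Exact match against the registered set; anything else fails closed."""
    return redirect_uri in REGISTERED_REDIRECT_URIS.get(client_id, set())
```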
1.4 — Prevent confused deputy attacks in MCP proxy servers
What to verify: If your MCP server acts as a proxy to third-party APIs (a common pattern), it must not use a static client ID shared across all users when communicating with the upstream authorization server. Each user’s authorization flow must produce distinct, user-scoped tokens for upstream API calls.
How to test: Authenticate as User A and obtain an authorization code. Determine whether that code can be exchanged by a client acting as User B. If yes, the proxy is vulnerable to confused deputy attacks.
Failure mode: An MCP proxy where all users share a single OAuth client identity with the upstream API. Attackers can exploit consent cookie state to intercept authorization codes during the OAuth flow and use them to access other users’ connected accounts (Gmail, GitHub, Slack, databases).
Spec reference: The MCP specification’s security documentation explicitly identifies confused deputy vulnerabilities as a critical risk in proxy server architectures. Mitigation requires either per-user dynamic client registration or token exchange (RFC 8693) to maintain user-level accountability across the proxy boundary.
1.5 — Use non-deterministic, bound session IDs
What to verify: Session IDs are generated using a cryptographically secure random number generator (e.g., UUID v4). Session IDs are bound to the authenticated user’s internal identifier. Sessions are invalidated on logout and cannot be transferred between users.
How to test: Generate 100 session IDs and verify there is no predictable sequence or pattern. Attempt to use a session ID with a different user’s credentials and verify the server rejects it.
Failure mode: Sequential or timestamp-based session IDs that can be guessed. Shared queue architectures where event injection allows an attacker with a valid session ID to hijack another user’s session by injecting events into a shared stream.
Spec reference: The MCP specification states that MCP servers MUST NOT use sessions for authentication, MUST use secure non-deterministic session IDs, and SHOULD bind session IDs to user-specific information.
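One way to sketch these requirements with the standard library: a CSPRNG for the random part and an HMAC tag to bind the session to a user. The key handling and ID format here are illustrative, not a prescribed scheme.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side signing key; in production, load it from a
# secrets store, never generate it per-process.
SESSION_KEY = secrets.token_bytes(32)

def new_session_id(user_id):
    """Return '<random>.<binding>': CSPRNG randomness plus a user-binding tag."""
    random_part = secrets.token_urlsafe(32)  # non-deterministic, unguessable
    binding = hmac.new(SESSION_KEY, f"{random_part}:{user_id}".encode(),
                       hashlib.sha256).hexdigest()
    return f"{random_part}.{binding}"

def session_belongs_to(session_id, user_id):
    """Reject a session ID presented with a different user's credentials."""
    try:
        random_part, binding = session_id.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SESSION_KEY, f"{random_part}:{user_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(binding, expected)
```

Note the constant-time comparison (`hmac.compare_digest`), which avoids leaking the binding tag byte-by-byte through timing differences.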
Domain 2: Authorization and Tool Permission Scoping
Authentication answers “who is this?” Authorization answers “what are they allowed to do?” The two controls are distinct, and treating a valid token as sufficient authorization is one of the most common MCP security failures.
2.1 — Enforce tool-level RBAC, not server-level access
What to verify: Access control is enforced at the individual tool level, not just at the MCP server boundary. A user or agent authorized to call read_record must not be able to call delete_record simply because both tools exist on the same server.
How to test: Create a role with access to read-only tools. Authenticate as that role. Attempt to invoke a write or delete tool. The invocation should fail with 403 Forbidden, not silently succeed or return an error from the downstream system after executing.
Failure mode: Server-level access control that grants all-or-nothing access to every tool on a server. In multi-agent workflows, this means a compromised specialist agent has the same blast radius as a fully privileged administrator.
2.2 — Apply least-privilege OAuth scopes per tool
What to verify: OAuth scopes requested during authorization map to specific tool permissions. A scope that grants access to read_contacts does not grant access to delete_contacts. Token scope is validated per tool invocation, not just per session.
How to test: Issue a token with a read-only scope. Attempt to invoke a write tool with that token. The server should reject the invocation based on insufficient scope, not execute it and return an upstream authorization error.
Failure mode: The “superuser trap”, where a single broad OAuth scope is granted to all connected agents, giving every agent access to every tool regardless of task context. This is the default pattern when teams use a shared service account token for all MCP server connections.
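A minimal per-invocation scope check; the tool names and scope strings are hypothetical. The point is the deny-by-default shape: unknown tools and missing scopes both fail before the tool executes.

```python
# Hypothetical mapping from tool name to the scope it requires.
TOOL_SCOPES = {
    "read_contacts": "contacts.read",
    "delete_contacts": "contacts.write",
}

def invocation_authorized(tool_name, token_scopes):
    """Deny by default: a tool with no declared scope is never invocable."""
    required = TOOL_SCOPES.get(tool_name)
    return required is not None and required in token_scopes
```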
2.3 — Bind tokens to specific MCP servers using resource indicators
What to verify: Access tokens are issued with a resource parameter (per RFC 8707) that binds the token to the specific MCP server’s URI. The MCP server validates that incoming tokens were issued for its own URI, not for a different server.
How to test: Issue a token for MCP Server A. Present that token to MCP Server B. Server B should reject it with 401 because the audience (aud) does not match its own identifier.
Failure mode: Tokens stolen from one MCP server can be replayed against other MCP servers in the same environment. In a multi-server deployment, token exfiltration from a low-privilege server becomes a path to accessing high-privilege servers.
2.4 — Audit permissions quarterly and after every tool addition
What to verify: A documented, executed process reviews all OAuth scopes, tool-level permissions, and service account assignments at a minimum every quarter and within 48 hours of any new tool being added to an MCP server.
Failure mode: Permission creep in which tools are added during development but never removed from production. Service accounts that accumulated permissions across multiple deployment iterations with no removal process.
Domain 3: Input Validation and Injection Defense
Command injection is the dominant MCP vulnerability class, accounting for 43% of analyzed CVEs between January and February 2026. The root cause is consistent: MCP servers passing user-supplied input to shell commands, SQL queries, or downstream API calls without validation or sanitization. This is not a new class of vulnerability: it is SQL injection and OS command injection, applied to a new protocol surface.
3.1 — Validate all inputs against JSON Schema on every invocation
What to verify: JSON Schema validation is enforced for every MCP protocol message, every tool invocation input, and every tool output before it reaches the model. Schema validation rejects any message with missing required fields, wrong types, values outside permitted ranges, or unexpected additional fields.
How to test: Send a tool invocation with a missing required parameter. Send a message with an extra unexpected field. Send a parameter value that exceeds the defined maximum length. All three should be rejected.
Failure mode: Optional validation that is bypassed under error conditions. Validation only on inputs, not on outputs returned to the model. Schemas that accept type: "any" parameters, which is equivalent to no schema validation.
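In production this is the job of a real JSON Schema validator; the stdlib-only sketch below (which uses Python types rather than JSON Schema syntax) shows the four checks in miniature: required fields, expected types, maximum lengths, and rejection of unexpected additional fields.

```python
def validate_input(params, schema):
    """Return a list of violations; an empty list means the input passes.

    Illustrative sketch of what a JSON Schema library enforces; the
    schema format here is invented for brevity.
    """
    errors = []
    for field, spec in schema.items():
        if field not in params:
            if spec.get("required", True):
                errors.append(f"missing required field: {field}")
            continue
        value = params[field]
        if not isinstance(value, spec["type"]):
            errors.append(f"wrong type for {field}")
        elif isinstance(value, str) and len(value) > spec.get("max_length", 4096):
            errors.append(f"{field} exceeds max length")
    for field in params:
        if field not in schema:  # unexpected additional field: reject
            errors.append(f"unexpected field: {field}")
    return errors
```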
3.2 — Never pass model-provided input directly to shell commands or SQL queries
What to verify: Tool implementations that interact with databases, file systems, or external processes use parameterized queries and sanitized subprocess calls. Model-provided input is never interpolated directly into SQL strings, shell commands, or system call arguments.
How to test: Send tool inputs containing SQL injection payloads ('; DROP TABLE users; --), shell metacharacters (;, |, &&, $()), and null bytes. Verify they are rejected or safely escaped before reaching downstream systems. Verify they do not cause the tool to execute unintended operations.
Failure mode: String interpolation of model output into SQL or shell commands. This is the direct cause of the 43% command injection rate observed across MCP server CVEs in early 2026. An MCP server that calls subprocess.run(f"git clone {user_input}") is exploitable by any client that can invoke it.
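The safe patterns in miniature, using the stdlib `sqlite3` driver and `subprocess` with an argument list; the table and repository URL are illustrative. In both cases the model-provided value is passed as data, never interpolated into a string the interpreter parses.

```python
import sqlite3
import subprocess

def lookup_user(conn, username):
    # Parameterized query: the driver binds username as a value,
    # so '; DROP TABLE users; -- is just an odd username, not SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

def clone_repo(url):
    # Argument list with shell=False: metacharacters in url are never
    # interpreted by a shell. Validate url against an allowlist first.
    subprocess.run(["git", "clone", "--", url], shell=False, check=True)
```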
3.3 — Enforce size limits on all inputs and outputs
What to verify: Every tool input parameter has a defined maximum length. Inputs that exceed the limit are rejected with a clear error, not truncated silently or passed to downstream systems. Tool outputs returned to the model are also size-limited to prevent context window manipulation.
How to test: Send an input parameter that exceeds the stated limit by one byte. The request should be rejected cleanly. Verify the rejection does not expose internal error messages that reveal system paths or stack traces.
Failure mode: No size limits allowing oversized inputs to cause memory exhaustion. Tool outputs that can be arbitrarily large, enabling an attacker who controls a downstream data source to inject effectively unlimited content into the model’s context window.
3.4 — Treat all tool outputs as untrusted input before passing to the model
What to verify: Content returned by tools (database query results, API responses, file contents, web pages) is treated as untrusted data before it enters the model’s context. This means size limiting, content type validation, and where appropriate, content sandboxing.
Why this matters: Indirect prompt injection is a documented, exploited attack vector. The GitHub MCP Server was successfully exploited via prompt injection embedded in public GitHub Issues and Pull Requests. The WhatsApp MCP Server was exploited via tool poisoning that injected instructions into tool descriptions. In both cases, the model processed attacker-controlled content as authoritative instructions.
Failure mode: A web-fetch tool that returns raw HTML from an attacker-controlled page. A database query tool that returns records containing embedded instruction sequences. An email-reading tool that processes message content without content sandboxing.
3.5 — Validate tool invocations as structured JSON only
What to verify: Tool invocations are only accepted as structured JSON objects with validated schemas. The server cannot be induced to execute tool calls by generating natural language that the server interprets as a command.
How to test: Send a natural language string that describes a tool invocation (“please call the delete_file tool with path /etc/passwd”). The server should reject it as malformed.
Failure mode: MCP servers that attempt to interpret free-form text input as tool invocations, creating a path from natural language manipulation to tool execution.
Domain 4: Secrets Management
Credentials stored in MCP server environments such as API keys, OAuth tokens, and database passwords are high-value targets. A compromised MCP server credential does not just expose the server itself; it exposes every downstream system the server’s tools are permitted to reach.
4.1 — No credentials in code, environment variables, or logs
What to verify: No API keys, OAuth client secrets, database passwords, or service account credentials are stored in source code, configuration files checked into version control, unencrypted environment variables, or log output. Audit logging pipelines are verified to strip credential values before writing to log sinks.
How to test: Scan the codebase with a secrets detection tool (Trufflehog, GitLeaks, or equivalent) before every production deployment. Review application logs for any field that contains token-like strings.
Failure mode: Verbose logs capturing full request/response payloads that include credentials passed as parameters. Docker image layers containing credentials baked in during the build process. .env files committed to version control with OAuth secrets.
Real-world incident: In September 2025, researchers discovered a malicious MCP package on npm impersonating Postmark’s email service. It functioned correctly as an email MCP server while secretly BCC’ing every message sent through it to an attacker. Any MCP server you install from a public registry may contain similar credential exfiltration behavior at the dependency level, not just the package level.
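One mitigation for the log-leakage failure mode is a redaction filter applied before any record reaches a sink. The patterns below are heuristic examples, not a complete credential grammar; tune them to the token formats your deployment actually handles.

```python
import logging
import re

# Heuristic patterns for token-like values; each keeps a prefix capture
# group so the field name survives while the value is masked.
SECRET_PATTERNS = [
    re.compile(r"(?i)(bearer\s+)[A-Za-z0-9._\-]+"),
    re.compile(r"(?i)((?:api[_-]?key|token|secret|password)\s*[=:]\s*)\S+"),
]

class RedactSecrets(logging.Filter):
    """Mask credential values in log records before they reach any sink."""
    def filter(self, record):
        msg = record.getMessage()
        for pattern in SECRET_PATTERNS:
            msg = pattern.sub(r"\1[REDACTED]", msg)
        record.msg, record.args = msg, None
        return True
```

Attach the filter to the root handler so every logger in the process passes through it; a filter on a single logger misses records emitted by libraries.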
4.2 — Use short-lived tokens with automated rotation
What to verify: OAuth access tokens expire within minutes (not hours). Refresh tokens are rotated on use. No long-lived API keys are used for production workloads. A credential rotation process exists and is documented, including a runbook for emergency rotation if a credential is suspected to be compromised.
How to test: Issue an access token. Wait for its stated expiry. Attempt a tool invocation with the expired token. The server should reject it. Verify the rotation mechanism works by deliberately rotating credentials and confirming that active sessions either refresh transparently or require re-authentication.
Failure mode: Long-lived tokens that must be manually rotated. Rotation processes that require manual steps across multiple configuration files. No documented runbook for emergency rotation, meaning a compromised credential remains valid for days while the response team figures out how to invalidate it.
4.3 — Store downstream credentials in a dedicated secrets store
What to verify: API keys, OAuth tokens, and service credentials for downstream services (the APIs your tools call) are stored in a dedicated secrets store (e.g. HashiCorp Vault, AWS Secrets Manager, or equivalent) and retrieved at runtime with the connecting agent’s scoped token. Credentials are never stored in the MCP server’s application memory longer than the duration of the request that requires them.
How to test: Review the MCP server’s startup configuration. Verify no downstream credentials are loaded at startup. Verify they are retrieved from the secrets store per-request, scoped to the authenticated agent’s identity.
Failure mode: A single set of downstream credentials used for all agents, regardless of which agent is making the request. If any agent’s token is compromised, the attacker gains access to all downstream systems through the shared credential.
Domain 5: Supply Chain Security
The MCP server you install is not just the code you reviewed. It is every dependency that code loads, every transitive dependency of those dependencies, and every update that has been pushed since you last reviewed it. The supply chain attack surface for MCP servers is wide and actively exploited.
5.1 — Verify image signatures and provenance before deployment
What to verify: Every MCP server container image is cryptographically signed and carries a verifiable provenance attestation before it is deployed to production. Provenance attestations follow the SLSA format and record which source repository, commit, branch, and build pipeline produced the image. Deployment pipelines verify signatures before pulling images; unsigned images are rejected by admission control.
How to implement: Use Cosign to sign images and verify signatures at deployment time. Generate SLSA provenance attestations using the SLSA Container Generator for GitHub Actions (or equivalent for your build platform). Configure a Kubernetes admission webhook (Sigstore Policy Controller or Kyverno) to reject any pod that references an image without a valid attestation.
# Sign an image with Cosign
cosign sign --key cosign.key ghcr.io/your-org/your-mcp-server:v1.2.3
# Verify signature before deployment
cosign verify --key cosign.pub ghcr.io/your-org/your-mcp-server:v1.2.3
# Verify SLSA provenance attestation
cosign verify-attestation \
--type slsaprovenance \
--key cosign.pub \
ghcr.io/your-org/your-mcp-server:v1.2.3
Why SLSA levels matter for MCP servers: SLSA Build Level 1 establishes that provenance is generated and recorded. Level 2 adds cryptographic signing. Level 3 requires an isolated build environment and unforgeable provenance. For MCP servers handling sensitive data or high-privilege tool access, SLSA Level 2 is the minimum acceptable bar; Level 3 is the target for servers with production access to critical systems.
Failure mode: The first malicious MCP package discovered in the wild (September 2025) was a typosquat of Postmark’s npm package. It passed casual inspection because it worked correctly: the malicious behavior was additive, not substitutive. Signature and provenance verification catches post-build tampering that code review cannot detect.
5.2 — Pin all dependencies to exact versions with hash verification
What to verify: All direct and transitive dependencies are pinned to exact versions with cryptographic hash verification (not semver ranges). pip install requests>=2.28 in a production MCP server is a supply chain vulnerability; any future version of requests that introduces malicious behavior will be automatically installed on your next build.
How to implement: Use pip-compile with --generate-hashes for Python dependencies. Use npm ci with a committed package-lock.json for Node.js. Use Go module checksums (go.sum) for Go. Verify that your CI/CD pipeline fails if the dependency lock file is not present or if any hash does not match.
Real-world incident: In late 2025, a dual reverse-shell MCP package was discovered on npm with zero declared dependencies. The malicious code was not present in the published package itself; it was downloaded each time npm install ran. Static dependency scanning reported “0 vulnerabilities.” Hash verification of the downloaded content would have detected the discrepancy.
5.3 — Maintain a signed, audited registry of approved MCP servers
What to verify: Your organization maintains a centralized inventory of all approved MCP servers, their versions, their source repositories, and their production deployment status. New MCP servers require a documented approval workflow, including security review of the server’s tool surface area before they can be deployed. Automated discovery detects shadow MCP deployments (servers installed outside the approval workflow) and alerts the platform team.
Failure mode: MCP server sprawl: individual developers install MCP servers on their machines or in development environments, and those servers gradually proliferate to production without security review. Research from Clutch Security found that in a typical 10,000-person organization, 15.28% of employees are running an average of 2 MCP servers each, most without any organizational governance. That is approximately 3,000 servers, and a great deal of exposed surface area.
5.4 — Scan for known vulnerabilities in dependencies before every deployment
What to verify: Every production deployment runs a Software Composition Analysis (SCA) scan against the dependency manifest. Deployments with high or critical CVEs in dependencies are blocked by the CI/CD pipeline, not by human review.
How to implement: Integrate pip-audit (Python), npm audit (Node.js), or govulncheck (Go) into the CI/CD pipeline as a required gate. For container images, run trivy or grype against the final image layer before pushing to the registry. Configure the pipeline to fail on findings with CVSS score ≥ 7.0.
Domain 6: Audit Logging and Observability
An audit trail that does not exist before an incident provides no forensic value after one. Audit logging is consistently underinvested in MCP deployments because it produces no visible user-facing benefit until something goes wrong, at which point its absence is catastrophic.
6.1 — Log every tool invocation with identity, parameters, and result
What to verify: Every tool invocation produces a structured log entry containing: the authenticated user or agent identity, the tool name, the input parameters, the result status (success/failure/error), the timestamp, and a trace ID that can correlate the invocation to a parent request or workflow.
How to test: Invoke a tool. Query the log store. Verify the log entry exists and contains all required fields. Revoke the audit log access for a test user and verify the log entry still records their identity correctly.
Failure mode: Tool invocations logged at the framework level without identity attribution. Logs that record “tool X was called” without recording which authenticated principal called it. Parameters logged in full without credential stripping, causing credentials to be written to log files.
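A sketch of one such structured entry, with illustrative field names. Parameters should pass through credential redaction (Domain 4) before reaching this point.

```python
import json
import time
import uuid

def audit_entry(principal, tool, params, status, trace_id=None):
    """Build one structured log line per tool invocation.

    Field names are illustrative; the point is that identity, tool,
    parameters, result, timestamp, and trace ID all appear together.
    """
    entry = {
        "timestamp": time.time(),
        "principal": principal,   # authenticated user or agent identity
        "tool": tool,
        "params": params,         # redact credentials before this point
        "status": status,         # success / failure / error
        "trace_id": trace_id or str(uuid.uuid4()),
    }
    return json.dumps(entry, sort_keys=True)
```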
6.2 — Use OTel MCP semantic conventions for trace continuity
What to verify: Audit telemetry uses the OpenTelemetry MCP semantic conventions (merged January 2026). This ensures that MCP tool invocation spans use standard attribute names — mcp.method, mcp.server.name, mcp.tool.name, and related fields, enabling correlation of MCP invocation traces with the broader application traces in your observability stack.
Why this matters: OTel semantic convention alignment means your MCP server traces integrate with Grafana, Datadog, Honeycomb, Splunk, and New Relic using the same attribute names as every other OTel-instrumented component. Non-standard attribute names produce siloed log streams that cannot be correlated across an end-to-end workflow.
6.3 — Ship logs to a SIEM that is independent of the MCP server
What to verify: Audit logs are shipped in real time to a SIEM or log aggregation system that the MCP server itself cannot write to or modify after the fact. Logs stored only on the MCP server’s local filesystem can be deleted or tampered with by an attacker who compromises the server.
Failure mode: Log files stored on the same host as the MCP server, with no real-time forwarding. If the server is compromised, the attacker can delete or modify logs before the incident is detected.
6.4 — Alert on authentication failures, anomalous tool invocation patterns, and unexpected outbound connections
What to verify: Alerting rules are configured for: repeated authentication failures from a single identity within a time window, tool invocations outside normal business hours or volume, tool invocations for tools the authenticated identity has never called before, and outbound network connections from the MCP server process to IPs or domains not in an approved allowlist.
Failure mode: No alerting on authentication failures, meaning brute-force token attacks run silently. No anomaly detection on tool invocation patterns, meaning a compromised agent that begins exfiltrating data through a file-read tool generates no alerts until the data has already left the environment.
Domain 7: Network Hardening
7.1 — Never bind to 0.0.0.0 in production
What to verify: MCP servers in production bind only to specific, intended network interfaces, not 0.0.0.0. Servers accessible only to local processes should bind to 127.0.0.1. Servers accessible to specific internal services should bind to the specific internal IP or be accessed through a service mesh with mutual TLS.
How to test: Run netstat -tlnp or ss -tlnp on the production host. Verify no MCP server process is listed with 0.0.0.0 as the bind address.
Failure mode: In the “NeighborJack” vulnerability class (June 2025), MCP servers bound to 0.0.0.0 responded to initialization handshakes from any device on the same network. Since the MCP initialization handshake is predictable and well-documented, automated scanning can identify and connect to exposed servers with a single request.
7.2 — Enforce TLS for all remote connections; reject self-signed certificates
What to verify: All connections between MCP clients and remote MCP servers use TLS with certificates from a recognized certificate authority. Self-signed certificates are not accepted in production. Certificate chain validation is enforced.
How to test: Configure a test client to connect to a server presenting an expired certificate. Verify the connection is rejected. Configure a test client to connect over plain HTTP. Verify the connection is rejected.
Failure mode: MCP clients connecting to servers over HTTP or accepting self-signed certificates are vulnerable to man-in-the-middle attacks that can steal tokens, modify tool call responses, or inject commands into the communication stream. CVE-2025-6514 (a CVSS 9.6 remote code execution vulnerability affecting nearly 437,000 downloads) was partially exploitable due to clients accepting unverified connections.
7.3 — Run MCP servers in isolated containers with minimal network access
What to verify: Each MCP server runs in an isolated container with a network policy that permits only the specific outbound connections the server’s tools require. A filesystem MCP server should have no outbound internet access. A web-fetch MCP server should be permitted outbound HTTPS to specific domains, not unrestricted egress.
How to implement: Define Kubernetes NetworkPolicy resources that allowlist specific egress destinations per MCP server. Use seccomp profiles to restrict the system calls available to MCP server containers. Drop all Linux capabilities and add back only those explicitly required.
# Example: NetworkPolicy for a database-access MCP server
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mcp-db-server-egress
spec:
  podSelector:
    matchLabels:
      app: mcp-db-server
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 5432          # Postgres only
      to:
        - ipBlock:
            cidr: 10.0.1.5/32 # Specific DB host only
Failure mode: A single MCP server container with unrestricted outbound internet access. If that server is compromised, the attacker has outbound network access from within your internal network.
7.4 — Place MCP servers behind an authenticated reverse proxy or gateway
What to verify: MCP server endpoints are not directly accessible from outside the network perimeter. All inbound traffic passes through an authenticated reverse proxy or MCP gateway that validates tokens before forwarding requests to the server.
Failure mode: A valid MCP initialization response to an unauthenticated connection means the server is an open proxy. As security researchers noted, an exposed MCP server that completes a handshake “functions as an open proxy to every downstream system its tools are permitted to reach.”
Record each item in this checklist as PASS (verified), FAIL (not implemented or partially implemented), or N/A (not applicable to this deployment, with documented justification).
How Stacklok Addresses This Checklist
Running a checklist is one thing. Operating the infrastructure that enforces it consistently across every MCP server in your organization is another problem entirely. Stacklok’s ToolHive, the open-source (Apache 2.0) MCP platform, automates several of the hardest items on this list at the platform level, so individual server developers do not need to re-implement them per server.
Container isolation and network hardening (7.1, 7.3, 7.4): Stacklok runs every MCP server in an isolated container with configurable network access and filesystem permissions via JSON permission profiles. Stacklok’s Kubernetes Operator auto-provisions a dedicated ServiceAccount, Role, and RoleBinding for each MCPServer resource with minimal permissions, namespace-scoped, no manual ClusterRole configuration required.
Embedded authorization server (1.1, 1.2, 4.2): Stacklok’s embedded authorization server runs in-process within a proxy, handling the full OAuth flow against Okta, Entra ID, or Google. Token issuance, validation, and exchange happen inside your cluster with no credentials stored on developer machines.
OTel MCP semantic convention alignment (6.2): As of March 2026, Stacklok’s telemetry aligns with the OpenTelemetry MCP semantic conventions merged in January 2026. Traces and metrics use standard attribute names compatible with Grafana, Datadog, Honeycomb, Splunk, and New Relic, so you can integrate an audit trail into your existing observability stack.
Supply chain attestation (5.1): Stacklok is led by Craig McLuckie (Kubernetes co-founder), with deep roots in supply chain security tooling. Server signing and provenance attestation are first-class platform concerns, reflecting the same security engineering discipline that produced the Kubernetes supply chain security ecosystem.
Curated registry with admin control (5.3): Stacklok’s Registry Server implements the official MCP Registry API. Platform administrators curate a trusted catalog of approved MCP servers; developers discover and deploy from the portal. Servers outside the curated catalog are not accessible to production workloads.
Frequently asked questions
Working through your MCP security checklist? Here are some additional questions to consider.
Can prompt injection be fully prevented?
No. The MCP specification’s own security documentation states this directly: “there is no complete defense against prompt injection.” The mitigations in this checklist (treating tool outputs as untrusted input, sandboxing external content, enforcing structured tool invocations) reduce the probability and impact of successful prompt injection but do not eliminate it. Layered defense is the only viable strategy.
What is SLSA, and why does it matter for MCP servers?
SLSA (Supply-chain Levels for Software Artifacts, pronounced “salsa”) is a framework from the Open Source Security Foundation that defines incremental levels of build integrity guarantees. For MCP servers, SLSA provenance attestations provide cryptographic evidence of which source repository, commit, and build pipeline produced a given container image. Because the MCP supply chain attack surface is actively exploited, with documented cases of malicious packages impersonating legitimate MCP servers on npm, SLSA attestations are the only mechanism that can verify an MCP server image has not been tampered with between build and deployment.
Does this checklist apply to local STDIO MCP servers?
Partially. Items 1.1 through 1.5 (OAuth 2.1, redirect URI whitelisting, confused deputy mitigations) apply primarily to remote MCP servers using HTTP/SSE transport. Local STDIO servers have a different authentication surface. However, items in Domains 3 (input validation), 4 (secrets management), 5 (supply chain security), and 6 (audit logging) apply regardless of transport. A local STDIO server that passes model-provided input directly to shell commands, stores credentials in environment variables, or lacks audit logging is still vulnerable.
How often should the checklist be re-run?
The full checklist should be re-run before every major version deployment, after any change to the server’s tool surface area or downstream system connections, and at a minimum quarterly. Items 2.4 (permission audit) and 5.4 (SCA scan) should run on every deployment as automated CI/CD gates, not just on a quarterly schedule.
March 21, 2026