k8s
Overview
The k8s MCP server is a Model Context Protocol (MCP) server that enables AI assistants and agents to interact directly with Kubernetes clusters through a structured, AI-friendly interface. It allows AI-driven workflows to inspect cluster state, explore resources, and reason about workloads without switching tools or manually using kubectl or the Kubernetes dashboard.
This server is ideal for platform engineering, operations, troubleshooting, and infrastructure-aware AI workflows built on Kubernetes.
Transport
streamable-http
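Because the server speaks streamable HTTP, most MCP clients can connect to it with a short configuration entry. The exact schema varies by client, and the server name, key names, and URL below are illustrative assumptions, not fixed values:

```json
{
  "mcpServers": {
    "k8s": {
      "type": "streamable-http",
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

Check your MCP client's documentation for the precise key it uses to select the transport (some clients use `transport` rather than `type`).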
Key Capabilities
- Cluster visibility — Explore namespaces, workloads, and resources programmatically.
- Workload introspection — Inspect pod status, deployment configuration, and runtime metadata.
- Operational troubleshooting — Support AI-assisted diagnosis of common Kubernetes issues.
- Topology understanding — Reason about how services, pods, and controllers relate to each other.
- Context-aware automation — Enable higher-level workflows that depend on real cluster state.
How It Works
Check out our Kubernetes MCP server guide.
The k8s MCP server runs as a local or in-cluster MCP service and connects to Kubernetes using standard authentication mechanisms such as kubeconfig files, in-cluster service accounts, or managed identity integrations. AI clients communicate with the server over the MCP protocol to request cluster context as part of broader reasoning workflows.
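For the in-cluster case, the service account the server runs under determines what it can see. A minimal read-only RBAC setup might look like the following sketch; the account name, namespace, and resource list are assumptions to adapt to your deployment:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-mcp-server   # hypothetical name
  namespace: mcp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-mcp-readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "namespaces", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]   # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-mcp-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8s-mcp-readonly
subjects:
  - kind: ServiceAccount
    name: k8s-mcp-server
    namespace: mcp
```

Granting only `get`, `list`, and `watch` keeps the AI workflow inspection-only, which is a sensible default for troubleshooting use cases.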
The server mediates access to the Kubernetes API, handling authentication, request execution, and response normalization. Results are returned in structured formats that AI assistants can reason over directly, while respecting Kubernetes RBAC policies and namespace boundaries.
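To make the "response normalization" step concrete, here is a minimal Python sketch of the idea: collapsing a verbose Kubernetes Pod object into a compact summary an AI client can reason over. The input field names follow the standard Kubernetes Pod schema; the summary shape and function name are assumptions for illustration, not the server's actual output format:

```python
# Sketch of response normalization: reduce a raw Pod API object
# to the handful of fields most useful for AI-assisted diagnosis.

def summarize_pod(pod: dict) -> dict:
    """Return a compact summary of a Kubernetes Pod object."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return {
        "name": pod["metadata"]["name"],
        "namespace": pod["metadata"].get("namespace", "default"),
        "phase": pod.get("status", {}).get("phase", "Unknown"),
        "restarts": sum(s.get("restartCount", 0) for s in statuses),
        "ready": all(s.get("ready", False) for s in statuses) if statuses else False,
    }

# Trimmed example of a Pod object as the API server returns it.
raw_pod = {
    "metadata": {"name": "web-7d4b9", "namespace": "prod"},
    "status": {
        "phase": "Running",
        "containerStatuses": [
            {"name": "web", "ready": True, "restartCount": 3},
        ],
    },
}

print(summarize_pod(raw_pod))
```

A real Pod object carries dozens of nested fields; stripping it down like this keeps responses small and lets the assistant focus on the signals that matter (phase, restarts, readiness).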
By exposing Kubernetes through MCP, the server enables AI-driven workflows such as cluster exploration, workload inspection, and guided troubleshooting — all through natural language and automated reasoning within a single environment.