launchdarkly


Overview

The launchdarkly MCP server is a Model Context Protocol (MCP) server that enables AI assistants and agents to interact directly with LaunchDarkly, the feature management and experimentation platform. It lets AI-driven workflows inspect feature flags, AI Configs, environments, and targeting rules, helping teams reason about feature rollout, experimentation, and risk without switching tools or manually navigating the LaunchDarkly UI.

This server is especially useful for release management, experimentation analysis, and operational decision-making where feature flag context is critical.

Transport

stdio
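
Because the transport is stdio, an MCP client launches the server as a subprocess and exchanges protocol messages over its standard input and output. The sketch below does this with the MCP TypeScript SDK; the npm package name (@launchdarkly/mcp-server), the start subcommand, and the --api-key flag are assumptions about how the server is published and authenticated, so check the GitHub repository's README for the exact launch command.

  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

  // Launch command is illustrative: the package name, "start" subcommand, and
  // --api-key flag are assumptions; LD_API_TOKEN is simply a local env var
  // holding a LaunchDarkly access token with appropriate scopes.
  const transport = new StdioClientTransport({
    command: "npx",
    args: [
      "-y",
      "@launchdarkly/mcp-server",
      "start",
      "--api-key",
      process.env.LD_API_TOKEN ?? "",
    ],
  });

  const client = new Client({ name: "example-client", version: "0.1.0" });
  await client.connect(transport);

  // List the tools the server advertises; names should match the Tools section.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));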

Tools

  • create-ai-config
  • create-ai-config-variation
  • delete-ai-config
  • delete-ai-config-variation
  • get-ai-config
  • get-ai-config-targeting
  • get-ai-config-variation
  • list-ai-configs
  • update-ai-config
  • update-ai-config-targeting
  • update-ai-config-variation
  • get-code-references
  • get-environments
  • create-feature-flag
  • delete-feature-flag
  • get-feature-flag
  • get-flag-status-across-environments
  • list-feature-flags
  • update-feature-flag
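
As a rough illustration of how these tools are invoked, the sketch below calls get-feature-flag through the client connected in the transport sketch above. The argument names (projectKey, featureFlagKey) mirror LaunchDarkly's REST API conventions but are assumptions; the authoritative parameter names come from each tool's input schema as reported by listTools().

  // Assumes the connected `client` from the earlier sketch.
  const result = await client.callTool({
    name: "get-feature-flag",
    arguments: {
      projectKey: "default",               // assumed parameter name
      featureFlagKey: "new-checkout-flow", // assumed parameter name
    },
  });

  // MCP tool results arrive as content blocks; text blocks carry the payload.
  const blocks = (result.content ?? []) as Array<{ type: string; text?: string }>;
  for (const block of blocks) {
    if (block.type === "text") console.log(block.text);
  }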

Key Capabilities

  • Feature flag visibility — Explore flags, variations, and environments programmatically.
  • Release awareness — Understand how features are rolled out across environments and audiences.
  • Risk reduction — Help AI assistants reason about blast radius and targeting before changes are made.
  • Experimentation insight — Surface contextual information about experiments and controlled rollouts.
  • Operational decision support — Enable assistants to answer questions like “who sees this feature?” or “is this flag safe to change?”

How It Works

The launchdarkly MCP server runs as an MCP service and connects to LaunchDarkly using an authenticated API token with appropriate scopes. AI clients communicate with the server over the MCP protocol to request feature flag and environment context as part of broader reasoning workflows.

The server mediates access to LaunchDarkly’s APIs, handling authentication, request execution, and response normalization. Results are returned in structured formats that AI assistants can reason over directly, while ensuring that all access respects LaunchDarkly’s project and environment permissions.
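
As a sketch of consuming one of these normalized results, the example below calls get-flag-status-across-environments and parses the returned text block as JSON. The payload shape, including the environments field, is an assumption about how the server serializes LaunchDarkly's response; the actual structure is defined by the server itself.

  const status = await client.callTool({
    name: "get-flag-status-across-environments",
    arguments: { projectKey: "default", featureFlagKey: "new-checkout-flow" },
  });

  // Join the text blocks and parse them into a structured object that the
  // assistant (or calling code) can reason over directly.
  const text = ((status.content ?? []) as Array<{ type: string; text?: string }>)
    .filter((b) => b.type === "text")
    .map((b) => b.text ?? "")
    .join("");

  const parsed = JSON.parse(text);
  console.log(Object.keys(parsed.environments ?? {})); // e.g. ["production", "test"]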

By exposing LaunchDarkly through MCP, the server enables AI-driven workflows such as feature rollout review, targeting analysis, and experimentation-aware decision-making — all through natural language and automated reasoning within a single environment.