cloud-run


Overview

cloud-run is a Model Context Protocol (MCP) server that lets AI assistants and agents interact directly with Google Cloud Run through a structured, AI-friendly interface. It enables AI-driven workflows to discover services, inspect configurations, deploy revisions, and manage runtime state without switching to the Google Cloud Console or writing custom integration code.

This server is well suited for cloud operations, service management, deployment inspection, and DevOps workflows centered on Cloud Run.

Transport

stdio
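Over the stdio transport, the client launches the server as a subprocess and exchanges newline-delimited JSON-RPC 2.0 messages on its stdin/stdout. A minimal Python sketch of building one such message is below; the `tools/call` method and `params` shape follow the MCP specification, while the project and region values are illustrative placeholders.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request as a JSON-RPC 2.0 message.

    The stdio transport sends each message as one line of JSON on the
    server process's stdin and reads responses from its stdout.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Serialize as a single line, as the stdio transport expects.
# "my-project" and "europe-west1" are placeholder values.
msg = make_tool_call(1, "list_services",
                     {"project": "my-project", "region": "europe-west1"})
line = json.dumps(msg) + "\n"
```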

Tools

  • deploy_file_contents
  • deploy_local_files
  • deploy_local_folder
  • deploy_container_image
  • list_services
  • get_service
  • get_service_log
  • list_projects
  • create_project
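Each tool takes a JSON arguments object. As a rough sketch of how a client might sanity-check a call before sending it, the snippet below pairs a few of the tool names above with assumed required fields; the actual argument schemas are published by the server via `tools/list` and may differ from these guesses.

```python
# Assumed required-argument sets for a few tools; illustrative only,
# not the server's actual schemas.
REQUIRED_ARGS = {
    "list_services": {"project"},
    "get_service": {"project", "service"},
    "get_service_log": {"project", "service"},
    "deploy_container_image": {"project", "service", "imageUrl"},
}

def missing_args(tool, args):
    """Return the required argument names absent from a proposed call."""
    return sorted(REQUIRED_ARGS.get(tool, set()) - set(args))
```

For example, a `deploy_container_image` call supplying only a project would be flagged as missing its service name and image URL.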

Key Capabilities

  • Service discovery — Explore Cloud Run services and revisions across projects and regions.
  • Deployment visibility — Inspect service configuration, traffic splits, and revision history.
  • AI-driven deployments — Trigger deployments or updates as part of conversational or automated workflows.
  • Operational insight — Retrieve runtime metadata and configuration for debugging or auditing.
  • Secure, permission-aware access — Uses Google Cloud IAM so agents only act within authorized scopes.

How It Works

The cloud-run MCP server runs as a local or containerized MCP service and authenticates with Google Cloud using standard mechanisms such as Application Default Credentials, gcloud CLI credentials, or service accounts. Once authenticated, the server exposes Cloud Run management operations as MCP tools.
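Application Default Credentials resolve in a documented order: the `GOOGLE_APPLICATION_CREDENTIALS` environment variable first, then the gcloud well-known file. The sketch below mirrors that lookup for Linux/macOS paths (Windows uses `%APPDATA%\gcloud\` instead, and on Google Cloud compute the metadata server would be consulted; both are omitted here for brevity).

```python
import os
from pathlib import Path

def adc_credentials_path():
    """Sketch of the Application Default Credentials file lookup order.

    1. GOOGLE_APPLICATION_CREDENTIALS environment variable, if set.
    2. The gcloud CLI's well-known credentials file.
    """
    env_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if env_path:
        return Path(env_path)
    return (Path.home() / ".config" / "gcloud"
            / "application_default_credentials.json")
```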

When an AI assistant invokes a tool, the server translates the MCP request into a call to the Google Cloud Run API, executes it on behalf of the user, and returns structured results over MCP. This abstraction lets AI agents reason about Cloud Run services — including deployment state, revisions, and configuration — without embedding Google Cloud SDK logic.
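To make the translation concrete, the Cloud Run Admin API v2 addresses services under `projects/{project}/locations/{region}/services`. The sketch below builds those REST URLs for two of the read-only tools; it shows the shape of the underlying API calls, not the server's actual implementation, and the project/region/service names in the usage example are placeholders.

```python
RUN_API = "https://run.googleapis.com/v2"

def list_services_url(project, region):
    """Cloud Run Admin API v2 endpoint for listing services in one region,
    roughly what a list_services tool call resolves to."""
    return f"{RUN_API}/projects/{project}/locations/{region}/services"

def get_service_url(project, region, service):
    """Endpoint for fetching one service's configuration and traffic state,
    roughly what a get_service tool call resolves to."""
    return f"{list_services_url(project, region)}/{service}"
```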

By making Cloud Run APIs appear as native tools inside AI workflows, the server enables use cases such as “list all services running in this project,” “inspect the last deployment,” or “roll out a new container image” through natural language and AI-driven automation.