Stacklok and SUSE bring Linux infrastructure management to your AI agent
Managing Linux infrastructure at scale is hard enough without having to context-switch between your AI coding assistant and a web UI every time you need to check on a system. Stacklok and SUSE have been collaborating to make this all much simpler.
Here’s what you’ll learn in this post:
- Why Stacklok and SUSE are collaborating, and what it means for enterprise teams
- How two new SUSE MCP servers in the Stacklok registry let you manage Linux infrastructure through natural language
- How a developer can use these servers day-to-day to identify vulnerabilities and coordinate patch schedules without leaving their AI assistant
A shared foundation: open source, interoperability, and community
Stacklok and SUSE are committed to open source as the foundation for enterprise software, and both believe that community-driven interoperability is how you build infrastructure that lasts in a fast-moving market. As our CEO, Craig McLuckie, likes to say, “If you want to go fast, go alone. If you want to go far, go together.”
That shared foundation underpins our recent collaboration around the Model Context Protocol. MCP is rapidly becoming the standard for how AI agents connect to the tools and systems developers rely on, which makes it more important than ever for enterprises to curate and manage a central registry of trusted MCP servers. A server that isn’t in the registry is a server that security teams can’t easily approve, and one that developers are less likely to use.
Two new SUSE MCP servers, now in the Stacklok registry
We’ve added two SUSE MCP servers to the Stacklok vetted registry, which is part of our open source MCP platform, ToolHive. Both servers expose the same set of capabilities for managing Linux infrastructure through an LLM; the difference is which SUSE product they connect to.
SUSE Multi-Linux Manager MCP Server connects to SUSE Multi-Linux Manager, SUSE’s commercial infrastructure management solution. It ships as a hardened container image from the SUSE registry and is the right choice for teams already running SUSE Multi-Linux Manager in production.
Uyuni MCP Server connects to Uyuni, the open source upstream project that underpins SUSE Multi-Linux Manager. It’s available on GitHub and is a natural fit for teams running Uyuni or evaluating the stack before committing to a commercial deployment.
Both servers are designed to run as containers, either locally in stdio mode or as a persistent HTTP service for multi-user environments. Both are licensed under the Apache License 2.0.
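For local stdio mode, many MCP clients are configured to launch the server container on demand. The snippet below is an illustrative sketch: the `mcpServers` layout follows a common MCP client convention, and the image reference is a placeholder, not the official published image name.

```json
{
  "mcpServers": {
    "uyuni": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "example.registry/uyuni-mcp-server:latest"]
    }
  }
}
```

For multi-user environments, you would instead run the container once as a persistent HTTP service and point each client at its endpoint.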
What these servers can do
Once connected, these servers expose a rich set of tools to your LLM:
- Inspect infrastructure — list active systems, retrieve system details, look up event history, and find systems by hostname or IP address
- Identify what needs attention — check all systems for pending updates, find systems exposed to a specific CVE, and list systems that require a reboot
- Take action — schedule patch applications, apply specific errata, schedule reboots, and cancel previously scheduled actions
- Manage system groups — create groups, add or remove systems, and list group membership
- Onboard and offboard systems — bootstrap new systems using activation keys and decommission systems from management
Both servers give security-conscious teams a clear opt-in boundary between read and write access. Write actions (scheduling patches, reboots, system changes) are disabled by default, and have to be explicitly enabled in your Stacklok registry configuration.
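The read/write boundary can be pictured as a simple allow-list gate. This Python sketch is an illustration of the disabled-by-default pattern, not the servers’ actual implementation; the tool names are drawn from the capability list above.

```python
# Illustrative sketch of an opt-in read/write boundary for MCP tools.
# Write tools are excluded unless explicitly enabled, mirroring the
# servers' disabled-by-default behavior (not their actual code).

READ_TOOLS = {
    "list_active_systems",
    "list_systems_needing_update_for_cve",
    "list_systems_needing_reboot",
}
WRITE_TOOLS = {
    "schedule_pending_updates_to_system",
    "schedule_reboot",
    "cancel_scheduled_action",
}


def exposed_tools(write_enabled: bool = False) -> set[str]:
    """Return the set of tools an MCP client is allowed to call."""
    tools = set(READ_TOOLS)
    if write_enabled:
        tools |= WRITE_TOOLS
    return tools
```

With this shape, a client that never opts in simply never sees the write tools, so a misbehaving agent cannot schedule patches or reboots by accident.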
What this looks like in practice
Here are two examples of how a developer or sysadmin might use these servers as part of their daily workflow.
Example 1: CVE triage without leaving your IDE
A new CVE lands in your team’s security feed. Normally, you’d open the SUSE Multi-Linux Manager web UI, search for affected systems, cross-reference your patch status, and then open a ticket for the ops team. With the MCP server connected to your AI assistant, you can do all of this in one place.
Ask your assistant: “Which of our managed systems are affected by CVE-2025-12345 and don’t have the fix scheduled yet?”
The server calls list_systems_needing_update_for_cve and list_all_scheduled_actions, then surfaces a clear answer. You can follow up: “Schedule the patch for the dev group first, then production after the weekend.” With write tools enabled, the server calls schedule_pending_updates_to_system for the relevant systems and confirms the scheduled actions, all without navigating a single UI.
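The triage step the assistant performs here is essentially a set difference: systems affected by the CVE, minus systems that already have a patch scheduled. This sketch assumes simple data shapes for the two tool results; the real tool outputs may differ.

```python
# Sketch of the triage logic behind chaining
# list_systems_needing_update_for_cve and list_all_scheduled_actions.
# The dict shapes below are assumptions for illustration.

def systems_missing_cve_fix(affected: list[dict],
                            scheduled: list[dict],
                            cve: str) -> list[str]:
    """Return IDs of affected systems with no patch scheduled for `cve`."""
    covered = {
        action["system_id"]
        for action in scheduled
        if action.get("type") == "patch" and cve in action.get("cves", [])
    }
    return [system["id"] for system in affected if system["id"] not in covered]


affected = [{"id": "web-01"}, {"id": "db-01"}]
scheduled = [{"system_id": "db-01", "type": "patch", "cves": ["CVE-2025-12345"]}]
gaps = systems_missing_cve_fix(affected, scheduled, "CVE-2025-12345")
```

Here `gaps` would contain only `web-01`, the system still waiting for a scheduled fix.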
Example 2: Pre-release environment health check
Before a release, your team wants to confirm that the target environment is fully patched and none of the systems need a reboot. This is the kind of repetitive, error-prone task that burns time.
Ask your assistant: “Give me a health summary of the systems in the staging group, including any pending updates, outstanding CVEs, or reboot requirements.”
The server chains list_group_systems, check_all_systems_for_updates, and list_systems_needing_reboot to produce a consolidated summary. If anything needs attention, you can act on it immediately: “Schedule the outstanding updates for all systems in staging.” What used to require multiple UI screens and manual cross-referencing becomes a single conversation.
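The consolidation the assistant performs is a join across the three tool results, keyed by system. This Python sketch assumes simplified result shapes (a list of system IDs, a per-system update count, and a set of systems needing a reboot); the actual tool outputs may be richer.

```python
# Sketch of consolidating list_group_systems,
# check_all_systems_for_updates, and list_systems_needing_reboot
# into one per-system summary. Data shapes are assumptions.

def group_health_summary(group_systems: list[str],
                         pending_updates: dict[str, int],
                         needs_reboot: set[str]) -> dict[str, dict]:
    """Merge per-system status for every member of a group."""
    return {
        system: {
            "pending_updates": pending_updates.get(system, 0),
            "needs_reboot": system in needs_reboot,
        }
        for system in group_systems
    }


summary = group_health_summary(
    group_systems=["app-01", "app-02"],
    pending_updates={"app-01": 3},
    needs_reboot={"app-02"},
)
```

A system is release-ready only when its pending-update count is zero and its reboot flag is false, which is exactly the check the assistant reports back.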
Both of these workflows work the same way whether you’re connected to Uyuni or SUSE Multi-Linux Manager; the MCP interface is consistent across both servers, so you’re not locked into one deployment model.
Infrastructure management belongs in your AI workflow
Stacklok’s registry isn’t just a list of MCP servers; it’s a governed, vetted catalog that enterprise teams can actually trust. Adding SUSE’s servers to that registry is a concrete step toward a world where AI agents can act across your full infrastructure stack, safely and with appropriate controls.
This is the first milestone in our collaboration with SUSE, and we’re looking forward to building on it.
Want to see what Stacklok can do for your organization? Book a demo or get started right away with ToolHive, our open source project. Join the conversation and engage directly with our team on Discord.
April 22, 2026