Applying lessons learned from building Kubernetes to software supply chain security

Author: Craig McLuckie / Jan 18, 2024 / Minder

I had the privilege to work on Kubernetes from the earliest days. One of the things I remember bending my mind was the control paradigm that Kubernetes instantiated, popularized, and ultimately genericized.

I am sure there were manifestations of it in other distributed systems before Kubernetes, but the combination of independently scoped controllers, performing level-triggered reconciliation against labeled resources (with independent label selectors) was pretty groundbreaking. It led to some very powerful capabilities in the cloud native space, and when genericized (through custom resource definitions) found applicability in all kinds of interesting ways (for example, in provisioning cloud infrastructure with things like Crossplane).

I don’t know who the primary progenitor of this pattern was, but I do remember Brendan Burns (who was a polymath and, in a previous life, a robotics professor) explaining it in a way that got through to me:

In short, when building a complex distributed control system, you want it to be level triggered rather than edge triggered, because if you miss an event, you don’t want to have to reconstruct state. You want your controllers to drive your system to a declared good state, rather than creating a reactive system that just responds to triggers.

You want your controllers to be scoped to a specific resource type, so that the system is flexible, scalable, and pluggable.

You want to reason about things (entities or resources) that are in and out of scope of responsibility for a controller, based on labels associated with those resources. This creates optimal flexibility (forced hierarchy is, after all, the path to madness), but also offers an elegant way to integrate the operator of a system (who ultimately is empowered to label resources) with the autonomous system. Want something to stop being managed by a controller? Just remove the label.

This made a lot of sense, and I think it is in large part why Kubernetes has been so successful and durable.
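
To make the pattern concrete, here is a minimal, hypothetical sketch (in Go) of a level-triggered, label-scoped controller loop. None of this is Kubernetes code; the types and names are invented purely for illustration.

```go
package main

import "fmt"

// Resource is a stand-in for any labeled object a controller might manage.
type Resource struct {
	Name     string
	Labels   map[string]string
	Desired  string // what the declared policy/spec asks for
	Observed string // what the world currently looks like
}

// inScope mimics label-selector scoping: a controller only reasons about
// resources carrying the labels it selects on.
func inScope(r *Resource, selector map[string]string) bool {
	for k, v := range selector {
		if r.Labels[k] != v {
			return false
		}
	}
	return true
}

// reconcile is level triggered: it compares observed state to desired state
// and converges, regardless of which (or how many) events were missed.
func reconcile(r *Resource) {
	if r.Observed != r.Desired {
		fmt.Printf("reconciling %s: %q -> %q\n", r.Name, r.Observed, r.Desired)
		r.Observed = r.Desired
	}
}

func main() {
	selector := map[string]string{"managed": "true"}
	resources := []*Resource{
		{Name: "repo-a", Labels: map[string]string{"managed": "true"}, Desired: "branch-protected"},
		{Name: "repo-b", Labels: map[string]string{}, Desired: "branch-protected"}, // unlabeled: ignored
	}

	// A periodic resync pass; real controllers also watch for change
	// notifications, but correctness never depends on seeing every event.
	for _, r := range resources {
		if inScope(r, selector) {
			reconcile(r)
		}
	}
}
```

Note that removing the managed label is all it takes to pull a resource out of the controller’s scope, which is exactly the operator integration point described above.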

So naturally, when thinking about the problems of establishing controls across the SDLC (aka securing the software supply chain), it made sense to look for inspiration in something that was familiar as we were building out Minder.

Minder is an open source platform for automating your application security posture. It is also, under the covers, a practical way to establish control loops (i.e., enforce policy) that are level triggered (i.e., pushing the resources into the state expressed in the policy) against resources/entities.

We are starting with GitHub repos as the resources/entities that we secure, but we will eventually branch out as we build other providers.

Minder's parallels with Kubernetes

Let’s look at how Minder works, and at the parallels it shares with Kubernetes as a control system.

The starting point for Minder is a Provider. The provider is essentially an interface between Minder and a resource type (or entity type) in the SDLC. For example, GitHub is a provider, and a GitHub repo is a resource type. A Kubernetes cluster, a build pipeline, or an OCI registry might also be resource types.
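
As a rough mental model, a provider can be thought of as something like the following sketch. The names and shapes here are hypothetical, not Minder’s actual Go API.

```go
package sketch

import "context"

// Entity is a normalized view of something a provider exposes, e.g. a GitHub
// repository, an OCI registry, or a build pipeline.
type Entity struct {
	Type       string         // e.g. "github_repository"
	Name       string         // e.g. "my-org/my-repo"
	Properties map[string]any // observable configuration fetched from the provider's APIs
}

// Provider is the interface between the control plane and one entity type in
// the SDLC: it handles connectivity/auth and exposes current, observable state.
type Provider interface {
	// ListEntities enumerates the entities this provider manages.
	ListEntities(ctx context.Context) ([]Entity, error)
	// GetEntity fetches the live state of a single entity.
	GetEntity(ctx context.Context, name string) (Entity, error)
}
```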

Minder handles connectivity (authentication and authorization) and supplies an eventing system (yes, we do have events to trigger rule/policy assessments), but the actions themselves reason about “levels,” i.e., they assess conformance based on the observable state that can be acquired via live APIs.

So now we have a system that normalizes resources, makes their configuration accessible for evaluation, and can trigger processing to make sure you are still aligned with policy when something changes, like when a repo config setting changes or someone submits a pull request. This is very similar to the Kubernetes model. For efficiency, we drive policy assessment at event edges (i.e., policy is assessed when something changes), but we don’t enforce policy based on the event payload. In future implementations, we will also assess policy periodically.
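
Continuing the hypothetical types from the provider sketch above, the “assess at edges, evaluate levels” behavior looks roughly like this: an event only tells the system which entity to look at, and the evaluation itself runs against state fetched live from the provider rather than against the event payload.

```go
package sketch

import "context"

// Rule is a hypothetical stand-in for a policy rule: it can evaluate an
// entity's observed state and remediate it if it is out of conformance.
type Rule interface {
	Evaluate(e Entity) bool
	Remediate(ctx context.Context, e Entity) error
}

// handleEvent is triggered at an "edge" (e.g. a webhook fired because a repo
// setting changed), but the decision is made on "levels": the current state
// pulled from the provider's live APIs.
func handleEvent(ctx context.Context, p Provider, entityName string, rules []Rule) error {
	entity, err := p.GetEntity(ctx, entityName) // ignore the event payload; fetch live state
	if err != nil {
		return err
	}
	for _, r := range rules {
		if !r.Evaluate(entity) {
			if err := r.Remediate(ctx, entity); err != nil {
				return err
			}
		}
	}
	return nil
}
```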

The eventing subsystem connects providers to rule enforcement/execution subsystems. Right now we use Watermill, but this might change in the future, depending on what the community needs.

Finally, we have rules. Rules are expressed in common policy languages like jq or Rego, since we don’t want to force a specific language on folks. Events trigger the evaluation of rules, or rules may simply be evaluated periodically to make sure things are in conformance. The rule specifications are powerful: they describe not only what things should look like, but also what actions should be taken if things are out of conformance.
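
As an illustration (still using the hypothetical types above, and not Minder’s actual rule format), a rule that requires branch protection could pair a conformance check over observed properties with a remediation action:

```go
package sketch

import (
	"context"
	"fmt"
)

// branchProtectionRule is a toy rule: the "what things should look like" part
// is the Evaluate check, and the "what to do about it" part is Remediate.
type branchProtectionRule struct{}

func (branchProtectionRule) Evaluate(e Entity) bool {
	enabled, _ := e.Properties["branch_protection_enabled"].(bool)
	return enabled
}

func (branchProtectionRule) Remediate(ctx context.Context, e Entity) error {
	// In a real system this would call back through the provider to change
	// the setting, open a PR, or raise an alert.
	fmt.Printf("enabling branch protection on %s\n", e.Name)
	return nil
}
```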

Rules are currently associated with all resources in a project, which is a pretty coarse-grained model. But we are absolutely planning to head down the path where rules can be mapped to specific things, like: 

  • Properties, or things that can be derived from the resource itself (for example, “this is a golang repo”) 

  • Labels

  • Tags

The studious will also note that we are cutting corners a little. For state management, rather than using a system like etcd (which, by its nature, is fully distributed and avoids a single point of failure), we are using a Postgres instance; Postgres is well understood from an operations perspective. Because we built Minder to run on Kubernetes, which provides an ‘ideal’ environment that handles things like node failures, we don’t have to concern ourselves with things like failover behaviors.

Use cases

It is pretty thrilling to see the team building a new system (in the open) that brings many of the learnings of backend platform engineering to the fun world of development workflow. We think it can enable some powerful use cases—for example:

  • Today, for GitHub: 

    • Make sure that no new dependencies are added to a project if they include high severity CVEs in the transitive dependency chain, or are not appropriately licensed

    • Make sure that branch protection is enabled for all my hundreds of repositories, and enable GitHub Advanced Security features on all repositories labeled ‘prod’

  • In the future, for Kubernetes:

    • Make sure that nothing is deployed into a Kubernetes cluster that isn’t signed or that includes a high severity CVE

    • Trigger a workflow that submits a PR requesting an update (and full CD cycle) when something running in your Kubernetes environment has a new vulnerability discovered in it.

Getting started with Minder

Here are some ways you can learn more and get started with Minder: 

We have made some really good progress with Minder, but still have a long way to go to fully materialize the vision. We invite you to join the community and be a part of the journey. Feedback is always hugely welcome and appreciated.

Craig McLuckie is Stacklok's CEO and the co-creator of the open source project Kubernetes.