
Securing Infrastructure as Code: Step One, Setting up IaC

Beginning of a series of posts on securing Terraform infrastructure as code. In this post, we set up a basic self-managed Terraform pipeline.

Author: Evan Anderson / 7 mins read / Aug 9, 2023

(This is the first in a series on managing and securing Infrastructure as Code (IaC). In this post, we’ll lay a foundation for later posts by describing how to bootstrap an IaC configuration using GitHub Actions.)

Software supply chains come in all shapes, sizes, and colors (what, your code doesn’t have colors?!).  While most people think of them as delivering applications to end users, some software delivers applications to the platforms they run on.  And those supply chains need to be secured, too!  Delving into all aspects of securing your Infrastructure as Code supply chain would be a very long blog post indeed, so instead we’re going to break it up into a series. This week, we’ll be laying the foundation for future security-focused content.  In the meantime, this post will provide a handy reference if you’ve been tasked with setting up a new IaC repo.

What is Infrastructure as Code?

Cloud computing allows organizations to provision and manage virtual infrastructure (computers, networks, storage, etc.) in ways that would be much harder with their physical counterparts.  One example is using a schematic of the desired configuration to directly provision the actual infrastructure – a process known as Infrastructure as Code.  Rather than needing to plan the layout of physical racks, cooling ducts, and electrical and network cabling, we can simply say “I need 27 servers with 8 threads and 16GB of RAM connected on a LAN, with 20 running disk image X, 5 running disk image Y, and one each running images Z and W”.  Not only does this take much of the cost and delay out of trying new configurations, but automatically mapping specifications to provisioned infrastructure can also improve the reliability, collaboration, and efficiency of infrastructure operations.
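To make that concrete, here’s a minimal sketch of what such a declaration looks like in Terraform.  The AMI ID, instance type, and subnet reference are placeholder assumptions for illustration, not values we’ll use in this series:

HCL
# Hypothetical: 20 identical servers running "disk image X".
resource "aws_instance" "image_x" {
  count         = 20
  ami           = "ami-00000000"     # placeholder for disk image X
  instance_type = "t3.large"         # an 8-thread / 16GB-class instance
  subnet_id     = aws_subnet.lan.id  # a hypothetical shared LAN subnet
}

Running terraform apply against a declaration like this creates all 20 servers at once; editing the count and re-applying resizes the fleet, with no racking or cabling involved.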

In these examples, we’re going to use the Terraform configuration language to provision infrastructure on AWS.  The general process is broadly similar on different cloud providers, though I’ll call out AWS-specific items where they are present.

While it’s possible to build infrastructure as code which simply sits in a repository and allows someone with the appropriate permissions to check out the code and run terraform manually, we’re going to describe how to set up Terraform to be run automatically from GitHub Actions.

Desired setup

Our goal is to provision multiple AWS services from a single set of Terraform configuration in a GitHub repo, mapping the identity of the GitHub Action to an AWS role.  In particular, we’d like to manage the mapping from GitHub identity to AWS Role using the terraform code in the repo, such that we can reproduce this authorization in the future if something becomes corrupted.  Additionally, storing this configuration in the repo acts as valuable documentation for the future.

The setup described above sounds fairly simple, but there are a few obstacles we’ll need to tackle to achieve our desired result:

  1. We can’t use the GitHub identity to establish the initial mapping, because that identity isn’t trusted by AWS.  We’ll cover this in the next section, Bootstrapping.

  2. Even after we have the GitHub trust established, the AWS Role doesn’t have permission to do anything.  We’ll cover this next, in Access Management.

  3. Finally, Terraform needs to track the state of the deployment in its state file.  The state file records (for example) resource IDs that are generated by the cloud provider but managed by Terraform, so that the tool is not constantly deleting and re-creating resources.  We’ll address this at the end of this article with State Storage.

Beyond the three challenges mentioned above, you’ll also need to set up your base Terraform definitions; a simple configuration might look like:

HCL
terraform {
  required_version = ">= 1.5.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.7"
    }
  }
}

# Configure the AWS Provider.
provider "aws" {
  region = "us-east-1"
  # If running e.g. terraform plan manually,
  # set AWS_PROFILE if not default.
}

Bootstrapping

While our ideal state seems pretty simple, mapping the identity of a GitHub workflow to an AWS Role requires a bit of OpenID configuration on the AWS side.  Fortunately, GitHub has documented this in their GitHub Actions documentation, including specific mappings for AWS.  The configuration below lines up with the tokens created by the aws-actions/configure-aws-credentials action that we’ll use later.  In particular, it provisions an OpenID Connect provider in IAM using the GitHub OpenID URL and thumbprints, and then adds an IAM role with a trust policy that allows tokens for workflows from this repo (update the $MYREPO reference) to be exchanged for AWS Role tokens:

HCL
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1",
                     "1c58a3a8518e8759bf075b76b750d4f2df264fcd"]
}

resource "aws_iam_role" "actions_iam_role" {
  name = "github_actions_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Federated = aws_iam_openid_connect_provider.github.arn
        },
        Action = "sts:AssumeRoleWithWebIdentity",
        Condition = {
          StringEquals = {
            "token.actions.githubusercontent.com:aud" : "sts.amazonaws.com",
          },
          StringLike : {
            # Can also use repo:$MYREPO:ref:$REFNAME,
            # see GitHub docs for an example
            "token.actions.githubusercontent.com:sub" : "repo:$MYREPO:*"
          }
        }
      }
    ]
  })
}

You can make use of claims like workflow or runner_environment to further limit which GitHub Actions can assume a particular role and, using infrastructure as code, create different roles for different GitHub Actions.
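For example, a sketch of a tighter trust condition that only allows workflows running from the main branch to assume the role.  The repo:…:ref:… subject format is the one documented by GitHub, and $MYREPO is still a placeholder you need to replace:

HCL
Condition = {
  StringEquals = {
    "token.actions.githubusercontent.com:aud" : "sts.amazonaws.com",
    # Only workflows on this repo's main branch may assume the role
    "token.actions.githubusercontent.com:sub" : "repo:$MYREPO:ref:refs/heads/main"
  }
}

Because the subject is now an exact match, the whole condition can move under StringEquals and the StringLike block goes away.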

Access Management

In our initial plan, we described managing the GitHub-to-AWS identity mapping with the Terraform code in the repo.  Now that we have the ability to assume an AWS Role, we’ll need to grant that role the permissions to manage the needed IAM resources.  Note that FOR BOOTSTRAPPING PURPOSES, we’ll be granting the role permissions on a broader set of resources than is probably wise – the ability to create and attach any policy to any role.  We’ll lock this down in the future, but starting with this “small” policy will help build understanding of what’s going on as the configuration gets more complicated.

HCL
resource "aws_iam_policy" "core_management" {
  name        = "infra_core_permissions"
  description = "Permissions for managing IAM via Terraform"
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "iam:*OpenIDConnectProvider",
          "iam:ListOpenIDConnectProviders",
          "iam:CreatePolicy",
          "iam:DeletePolicy",
          "iam:GetPolicy",
          "iam:ListPolicies",
          "iam:CreatePolicyVersion",
          "iam:DeletePolicyVersion",
          "iam:ListPolicyVersions",
          "iam:GetPolicyVersion",
          "iam:CreateRole",
          "iam:DeleteRole",
          "iam:GetRole",
          "iam:ListRoles",
          "iam:AttachRolePolicy",
          "iam:DetachRolePolicy",
          "iam:GetRolePolicy",
          "iam:ListRolePolicies",
          "iam:ListAttachedRolePolicies",
        ],
        Resource = "*"
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "core_management" {
  role       = aws_iam_role.actions_iam_role.name
  policy_arn = aws_iam_policy.core_management.arn
}

Note that we create two resources here: we define an IAM policy which can be applied to one or more identities in AWS, and then we attach the policy to the specific IAM Role which we declared in Bootstrapping.

State Storage

Because GitHub Actions runners are ephemeral (each run uses a new, fresh worker), we’ll need to use Terraform’s remote state management rather than the local terraform.tfstate file used by default.  Fortunately, Terraform supports storing state in AWS S3 with the appropriately-named s3 backend.  The s3 backend also uses a DynamoDB table to perform locking and to track the expected checksum of the latest state file, so that concurrent runs can’t corrupt the state.

Enabling the s3 backend is fairly straightforward, but you will need to choose a globally-unique S3 bucket name, which we’ll need again later on.  I’ve chosen $MY_GLOBALLY_UNIQUE_NAME, so you’ll need to pick another one for your own usage.

HCL
terraform {
  # ...
  backend "s3" {
    bucket         = "$MY_GLOBALLY_UNIQUE_NAME"
    key            = "github-actions.tfstate"
    dynamodb_table = "locks"
    region         = "us-east-1"
    encrypt        = true
  }
}
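One chicken-and-egg problem to be aware of: the backend can’t create its own storage, so the S3 bucket and DynamoDB table must exist before terraform init will succeed.  You can create them by hand in the console or CLI, or provision them with a sketch like the following (the resource names and the versioning block are my own suggestions; the string LockID hash key is what the s3 backend expects for its lock table):

HCL
resource "aws_s3_bucket" "tfstate" {
  bucket = "$MY_GLOBALLY_UNIQUE_NAME"
}

# Versioning lets you recover earlier state files if something goes wrong.
resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "locks" {
  name         = "locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"  # the key name the s3 backend uses for locking

  attribute {
    name = "LockID"
    type = "S"
  }
}

If you manage these with the same Terraform configuration, their state will end up stored in the bucket they define – which works, but means the very first apply has to run with local state before you migrate it to the backend.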

Having added this backend storage configuration, we also need to update our IAM policy so our role can access the state storage.  If we only have a single Role connecting from GitHub, we can add the new statements to our existing Policy definition from Access Management.  If you’re provisioning multiple AWS roles for different Terraform invocations, you may want to separate this into its own Policy, so that you can attach it to multiple roles without granting too many permissions.  In that case, you may also want to replace the github-actions.tfstate reference in the s3:*Object statement with a wildcard glob, allowing different actions to access their own state files in the same S3 bucket.

HCL
resource "aws_iam_policy" "core_management" {
  # ...
  policy = jsonencode({
    Statement = [
      # ...
      {
        Effect = "Allow",
        Action = [
          "s3:*",
        ],
        # This ARN matches only the bucket itself (not its objects),
        # so s3:* here grants only bucket-level actions like ListBucket.
        Resource = "arn:aws:s3:::$MY_GLOBALLY_UNIQUE_NAME"
      },
      {
        Effect = "Allow",
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
        ],
        Resource = "arn:aws:s3:::$MY_GLOBALLY_UNIQUE_NAME/github-actions.tfstate"
      },
      {
        Effect = "Allow",
        Action = [
          "dynamodb:DescribeTable",
          "dynamodb:DescribeContinuousBackups",
          "dynamodb:DescribeTimeToLive",
          "dynamodb:ListTagsOfResource",
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:DeleteItem",
        ],
        # We wildcard the account ID to avoid hard-coding it in the config
        Resource = "arn:aws:dynamodb:us-east-1:*:table/locks"
      }
    ]
  })
}
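If you do split the state-access permissions into a shared policy as discussed above, the object statement might use a glob instead of a single key.  A sketch – the *.tfstate pattern is an assumption about how you name your state keys:

HCL
      {
        Effect = "Allow",
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
        ],
        # Any state file in the bucket, assuming keys end in .tfstate
        Resource = "arn:aws:s3:::$MY_GLOBALLY_UNIQUE_NAME/*.tfstate"
      },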

Conclusion

Thanks for making it this far!  In this episode, we got our Terraform code ready to run on GitHub Actions.  You’ll note that we haven’t actually checked in any workflow configuration to run Terraform yet… that will come in the next episode, where we’ll also talk a bit about different ways to configure those actions.
