Scalable Terraform Patterns: Reuse and Repeatability

A colleague asked me about native Terraform constructs enabling reuse and repeatability: “You mean modules and whatnot?” he asked. Essentially, yes, though it’s worth elaborating a bit on both the modules and all the whatnot. This is my overview of Terraform’s three main mechanisms for reuse and repeatability.

Child modules: Generic, composable “recipes”

Terraform child modules offer a mechanism for abstracting, packaging, and reusing common Terraform resource configurations across multiple distinct Terraform projects. Child modules expose a simple interface to a more complex underlying configuration, similar to a programming language library or class; they’re generic abstractions of opinionated Terraform “recipes”:

graph LR;
  A[TF project 1]-->|apply|cloud-provider[cloud provider];
  B[TF project 2]-->|apply|cloud-provider;
  C[TF project 3]-->|apply|cloud-provider;
  D[TF project 4]-->|apply|cloud-provider;

  E[TF module]-->A


  • often have their own build/test/version/release CI/CD lifecycle, separate from (and agnostic to) their consumption amongst dependent projects (like a library, an NPM module, a Go package, or a marketplace GitHub Action)
  • enable platformization by decoupling capability enablement from the use of enabled capabilities
  • can be external or in-house
  • can be sourced from the local file system, an HTTP endpoint, git repositories, or from a Terraform registry
  • rich open source community
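To illustrate the source options listed above, a module can be declared against a few different source forms; the paths, URL, and ref below are hypothetical placeholders, not real endpoints:

```hcl
# Sourced from the local file system:
module "from_local_path" {
  source = "./modules/my-module"
}

# Sourced from a git repository at a specific ref
# (the URL and ref here are made up for illustration):
module "from_git" {
  source = "git::https://example.com/my-org/terraform-my-module.git?ref=v1.0.0"
}
```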

For example, a specific version of the cloudposse/terraform-aws-dynamodb module can be sourced from the public Terraform registry and instantiated within a Terraform root module project configuration:

module "main" {
  source  = "cloudposse/dynamodb/aws"
  version = "0.33.0"

  name              = "my-table"
  namespace         = "eg"
  hash_key          = "HashKey"
  range_key         = "RangeKey"
  enable_autoscaler = false
}
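The instantiated module’s outputs can then be referenced elsewhere in the root module via the `module.<name>` namespace. Output names vary by module; `table_name` is assumed here for illustration:

```hcl
# Surface a child module output as a root module output
# (assumes the module exposes a "table_name" output):
output "dynamodb_table_name" {
  value = module.main.table_name
}
```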

Workspaces: Apply a single root module project against multiple targets

Terraform workspaces make it possible to apply a single Terraform root module project configuration against multiple target contexts; each workspace gets its own distinct, isolated Terraform state, but uses the same underlying HCL project declaration codified in *.tf files.

graph LR;
  subgraph production[prod AWS account]
  end

  subgraph staging[staging AWS account]
  end

  subgraph dev[dev AWS account]
  end

  subgraph tfstate[TF state AWS account]
    dev-us-east-1-state;
    staging-us-east-1-state;
    staging-us-west-2-state;
    prod-us-east-1-state;
    prod-us-west-2-state;
  end

  A[TF project w/ single backend config declaration]-->|apply workspace_1|dev-us-east-1-state;
  A-->|apply workspace_2|staging-us-east-1-state;
  A-->|apply workspace_3|staging-us-west-2-state;
  A-->|apply workspace_4|prod-us-east-1-state;
  A-->|apply workspace_5|prod-us-west-2-state;


  • create multiple logical groupings of resources – each associated with its own independent Terraform state and name – from a single Terraform configuration
  • common use cases: apply the same project configuration multiple times against multiple named environments, such as dev, staging, and prod. Or: apply the same project configuration multiple times against multiple AWS regions, such as us-east-1, us-west-2, etc.
  • See Scalable Terraform patterns: compound workspace names and Using Terraform workspaces for more details.

For example, consider a simple root module project with a single terraform backend configuration:

terraform {
  # Terraform automatically saves each workspace's state to a distinct,
  # workspace-specific object path:
  # s3://${BUCKET}/env:/${terraform.workspace}/${KEY}
  # If no workspace is specified, Terraform uses the 'default' workspace and saves
  # the state to:
  # s3://${BUCKET}/${KEY}
  backend "s3" {
    bucket = "tf-state"
    key    = "terraform.tfstate"
  }
}

resource "some_resource" "resource" {
  name = terraform.workspace
}

The project can be applied against multiple named workspace targets; each workspace’s Terraform state is persisted to a distinct S3 object path, isolating workspace operations:

terraform workspace select -or-create "foo"
terraform plan
terraform apply
terraform workspace select -or-create "bar"
terraform plan
terraform apply
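As a sketch of the compound workspace name pattern referenced above, a workspace name like staging-us-west-2 can be decomposed into environment and region components within the configuration itself. The "<environment>-<region>" naming convention below is an assumption for illustration:

```hcl
locals {
  # Assumes workspace names follow an "<environment>-<region>" convention,
  # e.g. "staging-us-west-2".
  workspace_parts = split("-", terraform.workspace)

  environment = local.workspace_parts[0]
  region      = join("-", slice(local.workspace_parts, 1, length(local.workspace_parts)))
}

# The parsed components can then drive per-workspace configuration:
provider "aws" {
  region = local.region
}
```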

Built-in constructs: Enable HCL DRY-ness and logic within a root or child module

Additionally, Terraform offers various built-in constructs for authoring elegant, DRY HCL and expressing logic within a Terraform configuration. These constructs enable reasonably minimal Terraform HCL to manage a large volume of similar (same-ish?) resources:

graph LR;
  subgraph cloud-provider[cloud provider]
    state;
  end

  A[TF project]-->|apply|state;
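The constructs in question include count, for_each, for expressions, and dynamic blocks. As a minimal sketch, count stamps out N instances of a resource from a single block (some_resource is a placeholder resource type, as elsewhere in this post):

```hcl
# count creates N instances of a resource from a single block;
# count.index distinguishes each instance.
resource "some_resource" "replicated" {
  count = 3

  name = "resource-${count.index}"
}
```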


As a contrived example, consider the following Terraform configuration; leveraging some of the above-described constructs, relatively minimal HCL can scale to manage an arbitrarily large number of resources.

locals {
  # grafana_dashboards reads a YAML file encoding a list of desired dashboards
  # (each a name and a folder) into a local value.
  grafana_dashboards = yamldecode(file("${path.module}/dashboards.yaml"))

  # grafana_folders is a list of unique folder names.
  grafana_folders = distinct([
    for dashboard in local.grafana_dashboards : dashboard.folder
  ])
}

# grafana_folder.all creates a Grafana folder for each grafana_folders item.
resource "grafana_folder" "all" {
  for_each = toset(local.grafana_folders)

  title = each.value
}

# grafana_dashboard.all creates a Grafana dashboard for each grafana_dashboards item.
resource "grafana_dashboard" "all" {
  for_each = { for dashboard in local.grafana_dashboards : dashboard.dashboard => dashboard }

  folder      = grafana_folder.all[each.value.folder].id
  config_json = jsonencode({
    title = each.value.dashboard,
    uid   = replace(lower(each.value.dashboard), "_", "-")
  })
}
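The dashboards.yaml consumed by the configuration above might look like the following; the specific dashboard and folder names are invented for illustration:

```yaml
# dashboards.yaml: each item declares a dashboard name and its parent folder,
# matching the dashboard.dashboard and dashboard.folder references above.
- dashboard: service_latency
  folder: slos
- dashboard: service_errors
  folder: slos
- dashboard: deploy_frequency
  folder: delivery
```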