
Manage Kubernetes with Terraform

Deploy Consul and Vault on Kubernetes with Run Triggers

In this scenario, you will accomplish three things using Terraform Cloud run triggers.

  1. Deploy a Kubernetes cluster on Google Cloud.
  2. Deploy Consul on the Kubernetes cluster using a Helm chart
  3. Deploy Vault (configured to use a Consul backend) on the Kubernetes cluster using a Helm chart.

This scenario highlights Terraform and Terraform Cloud (TFC) best practices for code management and modules.

The Terraform configuration for each resource (Kubernetes, Consul, and Vault) is modularized and committed to its own version control repository. First, you will create and configure a TFC workspace for each resource, then link the workspaces together using run triggers.

Initially, the Kubernetes cluster will be provisioned with 3 nodes. When the enable_consul_and_vault variable is set to true, the Kubernetes cluster will scale to 5 nodes, and the run triggers will queue runs in the Consul and Vault workspaces to deploy Consul and Vault.

Terraform Cloud Workflow of the scenario. The Kubernetes workspace will trigger the Consul workspace, which in turn will trigger the Vault Workspace.
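
As a rough illustration of how this flag can drive the cluster size, the node count can be derived from the variable with a conditional expression. This is a minimal sketch, not the exact code in the learn-terraform-pipelines-k8s repository; the resource and attribute values shown here are assumptions.

# Hypothetical sketch: deriving the node count from enable_consul_and_vault.
# The real resource arguments live in the learn-terraform-pipelines-k8s repository.
resource "google_container_cluster" "cluster" {
  name               = var.cluster_name
  location           = "us-central1-a" # assumed zone
  initial_node_count = var.enable_consul_and_vault ? 5 : 3
}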

»Prerequisites

This scenario assumes that you are familiar with the standard Terraform workflow, Terraform Cloud, run triggers, and provisioning a Kubernetes cluster using Terraform.

If you are unfamiliar with any of these topics, reference their respective guides.

For this guide, you will need:

  1. a Terraform Cloud or Terraform Enterprise account
  2. a Google Cloud account with access to Compute Admin and GKE Admin
  3. a GitHub account

If you don't have your GCP credentials as a JSON file, or your credentials don't have access to Compute Admin and GKE Admin, reference the GCP documentation to generate a new service account with the right permissions.

If you are using a Google Cloud service account, your account must be assigned the Service Account User role.

»Create Kubernetes workspace

Fork the Learn Terraform Pipelines K8s repository. Update the organization and workspace values in main.tf to point to your organization and your workspace name — the default organization is hashicorp-learn and workspace is learn-terraform-pipelines-k8s. This is where the Terraform remote backend and Google provider are defined.

# main.tf
terraform {
  backend "remote" {
    organization = "hashicorp-learn"

    workspaces {
      name = "learn-terraform-pipelines-k8s"
    }
  }
}

Then, create a Terraform Cloud workspace connected to your forked repository. Terraform Cloud will confirm that the workspace configuration uploaded successfully.

Terraform Cloud Kubernetes Workspace Runs dashboard

»Configure variables

Click on "Configure variables" then specify the variables required for this deployment.

Set the variables declared in variables.tf in Terraform Variables.

  • region — GCP region to deploy clusters
    Set this to a valid GCP region like us-central1. For a full list of GCP regions, refer to Google’s Region and Zones documentation.
  • cluster_name — Name of Kubernetes cluster
    Set this to tfc-pipelines.
  • google_project — Google Project to deploy cluster
    This must already exist. Find it in your Google Cloud Platform console.
  • username — Username for Kubernetes cluster
    This can be anything; Terraform will set your username to this value when it creates the Kubernetes cluster.
  • password — Password for Kubernetes cluster
    Mark as sensitive. This can be any value at least 16 characters long. Terraform will set this when it creates your Kubernetes cluster and will distribute it as necessary when creating your Consul and Vault clusters. You do not need to manually input this value again.
  • enable_consul_and_vault — Enable Consul and Vault for the secrets cluster
    This must be set to false. This variable dictates whether Consul and Vault should be deployed on your Kubernetes cluster.

The variables configuration will look like this.

Terraform Cloud Kubernetes Workspace fully configured Terraform variables
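
For reference, the values you enter might look like the following example. Everything here other than the variable names is illustrative, and my-gcp-project is a placeholder for your own project ID; set these in the Terraform Cloud UI rather than committing them to a tfvars file, and mark password as sensitive.

# Example variable values (illustrative only; configure them in the Terraform Cloud UI)
region                  = "us-central1"
cluster_name            = "tfc-pipelines"
google_project          = "my-gcp-project"      # replace with your project ID
username                = "tfc-pipelines-admin" # any username
password                = "a-password-of-at-least-16-characters"
enable_consul_and_vault = false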

Then, set your GOOGLE_CREDENTIALS as a sensitive environment variable.

  • GOOGLE_CREDENTIALS — Flattened JSON of your GCP credentials.
    Mark as sensitive. This key must have access to both Compute Admin and GKE Admin.

You must flatten the JSON (remove newlines) before pasting it into Terraform Cloud. The command below flattens the JSON using jq.

cat <key file>.json | jq -c .

You have successfully configured your Kubernetes workspace. Terraform Cloud will use these values to deploy your Kubernetes cluster. The pipeline will output the Kubernetes credentials for the Helm charts to consume in the Consul and Vault workspaces. These values are specified in output.tf.
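
As a sketch of what those outputs typically look like (the actual output names in output.tf may differ), the Kubernetes workspace exposes the cluster endpoint and credentials as sensitive outputs:

# output.tf (illustrative sketch; check the repository for the authoritative output names)
output "host" {
  value     = google_container_cluster.cluster.endpoint
  sensitive = true
}

output "username" {
  value     = var.username
  sensitive = true
}

output "password" {
  value     = var.password
  sensitive = true
}

output "cluster_ca_certificate" {
  value     = google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
  sensitive = true
}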

»Create Consul workspace

Fork the Learn Terraform Pipelines Consul repository. Update the organization and workspace values in main.tf to point to your organization and your workspace name — learn-terraform-pipelines-consul.

# main.tf
terraform {
  backend "remote" {
    organization = "hashicorp-learn"

    workspaces {
      name = "learn-terraform-pipelines-consul"
    }
  }
}

The main.tf file contains the configuration for the Terraform remote backend, Terraform remote state (to retrieve values from the Kubernetes workspace), Kubernetes provider and Helm provider.
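
To make the chaining concrete, the remote state and provider wiring follows the general pattern below. The output and data source names are assumptions for illustration; the repository's main.tf is the source of truth.

# Illustrative sketch of the remote state and provider wiring.
data "terraform_remote_state" "cluster" {
  backend = "remote"

  config = {
    organization = var.organization
    workspaces = {
      name = var.cluster_workspace
    }
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.host
  username               = data.terraform_remote_state.cluster.outputs.username
  password               = data.terraform_remote_state.cluster.outputs.password
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = data.terraform_remote_state.cluster.outputs.host
    username               = data.terraform_remote_state.cluster.outputs.username
    password               = data.terraform_remote_state.cluster.outputs.password
    cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.cluster_ca_certificate)
  }
}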

Then, create a Terraform Cloud workspace connected to your forked repository. The UI will confirm that the workspace configuration uploaded successfully.

Terraform Cloud Consul Workspace Runs dashboard

»Configure variables

Click on "Configure variables" then specify the variables required for this deployment.

Set the variables declared in variables.tf in Terraform Variables. cluster_workspace and organization should point to your Kubernetes cluster workspace and organization. Do not set a value for replicas; it will be set automatically by the run triggers.

  • release_name — Helm Release name for Consul chart
    Set this to hashicorp-learn. Your Vault pods will start with this release name.
  • namespace — Kubernetes Namespace to deploy the Consul Helm chart
    Set this to hashicorp-learn. You will use this to access your Consul and Vault instances later.
  • cluster_workspace — Workspace that created the Kubernetes cluster
    If you didn't customize your workspace name, this is learn-terraform-pipelines-k8s.
  • organization - Organization of workspace that created the Kubernetes cluster
    Set this to your Terraform Cloud Organization.

The variables configured in the UI will look like this.

Terraform Cloud Consul Workspace fully configured Terraform variables

»Configure workspace version control

Click on "Settings" then select "Version Control".

Check the "Include submodules on clone" box under the workspace's VCS settings, then click "Update VCS Settings". This will use the Helm chart referenced in the submodule because we don't have a Helm chart repository.

Terraform Cloud Kubernetes Workspace fully configured version control version. Include submodules on clone option has been ticked.
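
Because the chart lives in a Git submodule rather than a chart repository, the helm_release resource can point at the chart's path inside the cloned working directory. A minimal sketch, assuming the submodule is checked out at consul-helm:

# Hypothetical helm_release using the locally vendored chart; the path is an assumption.
resource "helm_release" "consul" {
  name      = var.release_name
  namespace = var.namespace
  chart     = "${path.module}/consul-helm" # chart checked out as a Git submodule
}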

»Enable run trigger

Click on "Settings" then select "Run Triggers".

Under "Source Workspaces", select your Kubernetes workspace (learn-terraform-pipelines-k8s) then click "Add Workspace".

Terraform Cloud Consul Workspace enable run trigger to point to Kubernetes Workspace

You have successfully configured your Consul workspace. The pipeline will retrieve the Kubernetes credentials from the Kubernetes workspace to authenticate to the Kubernetes and Helm provider.

»Create Vault workspace

Fork the Learn Terraform Pipelines Vault repository. Update the organization and workspace values in main.tf to point to your organization and your workspace name (learn-terraform-pipelines-vault).

# main.tf
terraform {
  backend "remote" {
    organization = "hashicorp-learn"

    workspaces {
      name = "learn-terraform-pipelines-vault"
    }
  }
}

The main.tf file contains the configuration for the Terraform remote backend, Terraform remote state (to retrieve values from the Kubernetes and Consul workspaces), and Helm provider.
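
The Vault workspace reads remote state from both upstream workspaces, along these lines (a sketch; the data source names are illustrative):

# Illustrative sketch: the Vault workspace consumes state from both upstream workspaces.
data "terraform_remote_state" "cluster" {
  backend = "remote"

  config = {
    organization = var.organization
    workspaces = {
      name = var.cluster_workspace
    }
  }
}

data "terraform_remote_state" "consul" {
  backend = "remote"

  config = {
    organization = var.organization
    workspaces = {
      name = var.consul_workspace
    }
  }
}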

Then, create a Terraform Cloud workspace connected to your forked repository. Terraform Cloud will confirm that the configuration uploaded successfully.

Terraform Cloud Vault Workspace Runs dashboard

»Configure variables

Click on "Configure variables" then specify the variables required for this deployment.

Set the variables declared in variables.tf in Terraform Variables. consul_workspace and cluster_workspace should point to their respective workspaces. organization should point to your organization.

  • consul_workspace — Terraform Cloud Workspace for the Consul cluster. If you didn't customize your workspace name, this is learn-terraform-pipelines-consul.
  • cluster_workspace — Terraform Cloud Workspace for the Kubernetes cluster. If you didn't customize your workspace name, this is learn-terraform-pipelines-k8s.
  • organization — Organization of the workspace that created the Kubernetes cluster. Set this to your Terraform Cloud Organization.

The configured variables in the UI will look like this.

Terraform Cloud Vault Workspace fully configured Terraform variables

»Configure workspace version control

Click on "Settings" then select "Version Control".

Check the "Include submodules on clone" box under the workspace's VCS settings, then click "Update VCS Settings". This will use the Helm chart referenced in the submodule because we don't have a Helm chart repository.

»Enable run trigger

Click on "Settings" then select "Run Triggers".

Under "Source Workspaces", select your Consul workspace (learn-terraform-pipelines-consul) then click "Add Workspace.

You have successfully configured your Vault workspace. The pipeline will retrieve the Kubernetes credentials from the Kubernetes workspace to authenticate to the Helm provider; the pipeline will retrieve the Helm release name and Kubernetes namespace from the Consul workspace.
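
Concretely, the Vault release can consume those values from the Consul workspace's remote state. A rough sketch under assumed output names (namespace, release_name) and an assumed chart path:

# Hypothetical sketch of the Vault release consuming the Consul workspace's outputs.
resource "helm_release" "vault" {
  name      = "vault"                                              # assumed release name
  namespace = data.terraform_remote_state.consul.outputs.namespace # deploy alongside Consul
  chart     = "${path.module}/vault-helm"                          # chart vendored as a Git submodule

  # The Consul release name output would typically be used here to point Vault's
  # Consul storage backend at the right Consul service, e.g. through a set {} block.
}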

»Deploy Kubernetes cluster

Now that you have successfully configured all three workspaces (Kubernetes, Consul, and Vault), you can deploy your Kubernetes cluster.

Select your Kubernetes workspace and click "Queue Plan". If the plan is successful, Terraform Cloud will note that applying this run will automatically queue a plan in the Consul workspace, and will ask you to confirm and apply.

Terraform Cloud Kubernetes Workspace queued run. `learn-terraform-pipelines-consul` workspace will be listed under the run plan as a run trigger.

Notice that a plan for the learn-terraform-pipelines-consul workspace will be automatically queued once the apply completes. However, since enable_consul_and_vault is set to false, the Kubernetes cluster will be deployed with 3 nodes.

Click "Confirm & Apply" to apply this configuration. This process should take about 10 minutes to complete.

Once the apply has been completed, verify your Kubernetes cluster is provisioned by visiting the GKE Console Page. Your Kubernetes cluster should only have three nodes and no workloads.

Google Cloud Kubernetes dashboard showing a GKE cluster named `tfc-pipelines` with 3 nodes

»Enable Consul and Vault

Now that your Kubernetes cluster has been provisioned, you will deploy Consul and Vault on your cluster.

Navigate to your Kubernetes workspace. Click on "Configure variables" then set the Terraform variable enable_consul_and_vault to true.

Click "Queue plan". In the run plan, note that the cluster will scale from 3 to 5 nodes. Click "Confirm & Apply" to scale your cluster.

This process should take about 2 minutes to complete.

Notice that a plan for the learn-terraform-pipelines-consul workspace will be automatically queued once the apply completes.

Terraform Cloud Kubernetes Workspace queued run with enable_consul_and_vault set to true. `learn-terraform-pipelines-consul` workspace will be listed under the run plan as a run trigger.

»Deploy Consul

Navigate to the Consul workspace, view the run plan, then click "Confirm & Apply". This will deploy Consul onto your cluster using the Helm provider. The plan retrieves the Kubernetes cluster authentication information from the Kubernetes workspace to configure both the Kubernetes and Helm providers.

This process will take about 2 minutes to complete.

Notice that a plan for the learn-terraform-pipelines-vault workspace will be automatically queued once the apply completes.

Terraform Cloud Consul Workspace queued run. `learn-terraform-pipelines-vault` workspace will be listed under the run plan as a run trigger.

»Deploy Vault

Navigate to the Vault workspace, view the run plan, then click "Confirm & Apply". This will deploy Vault onto your cluster using the Helm provider and configure it to use Consul as the backend. The plan retrieves the Kubernetes namespace from the Consul workspace's remote state and deploys Vault to the same namespace.

This process will take about 2 minutes to complete.

Terraform Cloud Vault Workspace queued run.

»Verify Consul and Vault deployments

Once the apply has completed, verify the deployments by visiting the GKE Console. Your Kubernetes cluster should now have 5 nodes. Navigate to "Workloads" and notice that Consul and Vault have been deployed.

Notice that the Vault pods have warnings because Vault is sealed. You will have the option to unseal Vault and resolve the warnings once you enable port forwarding.

Google Cloud Kubernetes dashboard showing a GKE cluster named `tfc-pipelines` with 5 nodes

Verify that Consul and Vault have both been deployed by viewing their respective dashboards.

First, activate your Cloud Shell (button on top right).

Google Cloud header with "Cloud Shell" option pointed out (top right corner near the help button)

Run the following command in Cloud Shell to configure it to access your Kubernetes cluster. Replace PROJECT_NAME with your Google Cloud project name. If you didn't use the default values, replace tfc-pipelines with your Kubernetes cluster name and us-central1-a with your zone.

$ gcloud container clusters get-credentials tfc-pipelines --zone us-central1-a --project PROJECT_NAME

Run the following command — it forwards the Consul UI port (8500) on the consul-server-0 pod to local port 8080, allowing you to access it in the Web Preview. Replace hashicorp-learn with your Kubernetes namespace.

$ kubectl port-forward -n hashicorp-learn consul-server-0 8080:8500

After you run this command, open your Web Preview to port :8080 to view the Consul UI.

Google Cloud with Web Preview option pointed out (top right corner of the "Cloud Shell")

Entire process to access Consul UI using Cloud Shell (running on port :8080)

Congratulations — you have successfully completed the scenario and applied some Terraform Cloud best practices. By keeping your infrastructure configuration modular and integrating workspaces together using run triggers, your Terraform configuration becomes extensible and easier to understand.

»Clean up resources

To clean up the resources and destroy the infrastructure you have provisioned in this track, go to each workspace in the reverse order you created them in, queue a destroy plan, and apply it. Then, delete the workspace from Terraform Cloud. Destroy and delete your workspaces in the following order:

  1. Vault workspace
  2. Consul workspace
  3. Kubernetes workspace

For a more detailed guide on destroying resources on Terraform Cloud, reference the Clean up Cloud Resources guide.

»Next steps

To watch a video of a demo similar to this guide, reference the Infrastructure Pipelines with Terraform Cloud webinar.

To learn how to get started with Consul Service Mesh, visit the Getting Started with Consul Service Mesh Learn track.

To learn how to leverage Vault features on Kubernetes, visit the Kubernetes Learn Vault track.