In this tutorial, you will accomplish three things using Terraform Cloud run triggers:
- Deploy a Kubernetes cluster on Google Cloud.
- Deploy Consul on the Kubernetes cluster using a Helm chart.
- Deploy Vault (configured to use a Consul backend) on the Kubernetes cluster using a Helm chart.
This tutorial highlights Terraform and Terraform Cloud (TFC) best practices for code management and modules.
The Terraform configuration for each resource (Kubernetes, Consul, and Vault) is modularized and committed to its own version control repository. First, you will create and configure TFC workspaces for each resource, then link them together using run triggers.
Initially, the Kubernetes cluster will be provisioned with 3 nodes. When the `enable_consul_and_vault` variable is set to `true`, the Kubernetes cluster will scale to 5 nodes and the run triggers will deploy Consul and Vault.
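Under the hood, this toggle can be implemented as a conditional expression on the node count. The following is a minimal sketch, assuming variable and resource names similar to those in the tutorial repository:
# Sketch: node count driven by the toggle variable (names are illustrative).
variable "enable_consul_and_vault" {
  description = "Whether to deploy Consul and Vault on the cluster"
  type        = bool
  default     = false
}

resource "google_container_cluster" "cluster" {
  # 5 nodes when Consul and Vault are enabled, 3 otherwise
  initial_node_count = var.enable_consul_and_vault ? 5 : 3
  # ... remaining cluster configuration ...
}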
»Prerequisites
This tutorial assumes that you are familiar with the standard Terraform workflow, Terraform Cloud, run triggers, and provisioning a Kubernetes cluster using Terraform.
If you are unfamiliar with any of these topics, reference their respective tutorials.
- Terraform Workflow — All Get Started tutorials
- Terraform Cloud — All Get Started with Terraform Cloud tutorials
- Run Triggers — Connect Workspaces with Run Triggers
- Provision GKE cluster using Terraform — Provision a GKE Cluster (Google Cloud)
For this tutorial, you will need:
- a Terraform Cloud or Terraform Enterprise account
- a Google Cloud account with access to Compute Admin and GKE Admin
- a GitHub account
If you don't have your GCP credentials as a JSON file, or your credentials don't have access to Compute Admin and GKE Admin, reference the GCP documentation to generate a new service account with the right permissions.
If you are using a Google Cloud service account, your account must be assigned the Service Account User role.
Note: There may be some charges associated with running this configuration. Please reference the Google Cloud pricing guide for more details. Instructions to remove the infrastructure you create can be found at the end of this tutorial.
An interactive tutorial is also available to complete the steps described in this tutorial. Launch it here.
»Create Kubernetes workspace
Fork the Learn Terraform Pipelines K8s repository. Update the `organization` and `workspaces` values in `main.tf` to point to your organization and your workspace name — the default organization is `hashicorp-learn` and the default workspace is `learn-terraform-pipelines-k8s`. This is where the Terraform remote backend and Google provider are defined.
# main.tf
terraform {
backend "remote" {
organization = "hashicorp-learn"
workspaces {
name = "learn-terraform-pipelines-k8s"
}
}
}
Then, create a Terraform Cloud workspace connected to your forked repository. Terraform Cloud will confirm that the workspace configuration uploaded successfully.
»Configure variables
Click on "Configure variables" then specify the variables required for this deployment.
Set the variables declared in `variables.tf` as Terraform Variables.
- region — GCP region to deploy clusters. Set this to a valid GCP region, such as `us-central1`. For a full list of GCP regions, refer to Google's Regions and Zones documentation.
- cluster_name — Name of the Kubernetes cluster. Set this to `tfc-pipelines`.
- google_project — Google project to deploy the cluster in. This project must already exist. Find it in your Google Cloud Platform console.
- username — Username for the Kubernetes cluster. This can be anything; Terraform will set your username to this value when it creates the Kubernetes cluster.
- password — Password for the Kubernetes cluster. Mark as sensitive. This can be anything over 16 characters. Terraform will set this when it creates your Kubernetes cluster and will distribute it as necessary when creating your Consul and Vault clusters. You do not need to manually input this value again.
- enable_consul_and_vault — Enable Consul and Vault for the secrets cluster. This must be set to `false` for now. This variable dictates whether Consul and Vault should be deployed on your Kubernetes cluster.
The variables configuration will look like this.
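For reference, the underlying declarations in `variables.tf` are shaped roughly like this sketch (paraphrased; check the forked repository for the exact descriptions and defaults):
# Sketch of variables.tf; paraphrased, not the exact file contents
variable "region" {
  description = "GCP region to deploy clusters"
}

variable "cluster_name" {
  description = "Name of the Kubernetes cluster"
}

variable "google_project" {
  description = "Google project to deploy the cluster in"
}

variable "username" {
  description = "Username for the Kubernetes cluster"
}

variable "password" {
  description = "Password for the Kubernetes cluster"
}

variable "enable_consul_and_vault" {
  description = "Enable Consul and Vault for the secrets cluster"
  default     = false
}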
Then, set your `GOOGLE_CREDENTIALS` as a sensitive environment variable.
- GOOGLE_CREDENTIALS — Flattened JSON of your GCP credentials.
Mark as sensitive. This key must have access to both Compute Admin and GKE Admin.
You must flatten the JSON (remove newlines) before pasting it into Terraform Cloud. The command below flattens the JSON using jq.
$ cat <key file>.json | jq -c .
You have successfully configured your Kubernetes workspace. Terraform Cloud will use these values to deploy your Kubernetes cluster. The pipeline will output the Kubernetes credentials for the Helm charts to consume in the Consul and Vault workspaces. These values are specified in `output.tf`.
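As an illustration, those outputs might be shaped like the following sketch. The output names and resource references here are assumptions; see `output.tf` in the repository for the real ones:
# Sketch: outputs that downstream workspaces read via remote state.
output "host" {
  value = google_container_cluster.cluster.endpoint
}

output "username" {
  value = var.username
}

output "password" {
  value     = var.password
  sensitive = true
}

output "cluster_ca_certificate" {
  value = google_container_cluster.cluster.master_auth.0.cluster_ca_certificate
}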
»Create Consul workspace
Fork the Learn Terraform Pipelines Consul repository. Update the `organization` and `workspaces` values in `main.tf` to point to your organization and your workspace name — `learn-terraform-pipelines-consul`.
# main.tf
terraform {
backend "remote" {
organization = "hashicorp-learn"
workspaces {
name = "learn-terraform-pipelines-consul"
}
}
}
The `main.tf` file contains the configuration for the Terraform remote backend, Terraform remote state (to retrieve values from the Kubernetes workspace), Kubernetes provider, and Helm provider.
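The remote state lookup is what wires the workspaces together. A minimal sketch, assuming the Kubernetes workspace exposes outputs named `host`, `username`, `password`, and `cluster_ca_certificate`:
# Sketch: consume the Kubernetes workspace's outputs via remote state.
data "terraform_remote_state" "cluster" {
  backend = "remote"

  config = {
    organization = var.organization
    workspaces = {
      name = var.cluster_workspace
    }
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.cluster.outputs.host
  username               = data.terraform_remote_state.cluster.outputs.username
  password               = data.terraform_remote_state.cluster.outputs.password
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.cluster_ca_certificate)
}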
Then, create a Terraform Cloud workspace connected to your forked repository. The UI will confirm that the workspace configuration uploaded successfully.
»Configure variables
Click on "Configure variables" then specify the variables required for this deployment.
Set the variables declared in `variables.tf` as Terraform Variables. `cluster_workspace` and `organization` should point to your Kubernetes cluster workspace and organization. Do not set a value for `replicas`; it will be set automatically by the run triggers.
- release_name — Helm release name for the Consul chart. Set this to `hashicorp-learn`. Your Vault pods will start with this release name.
- namespace — Kubernetes namespace to deploy the Consul Helm chart into. Set this to `hashicorp-learn`. You will use this to access your Consul and Vault instances later.
- cluster_workspace — Workspace that created the Kubernetes cluster. If you didn't customize your workspace name, this is `learn-terraform-pipelines-k8s`.
- organization — Organization of the workspace that created the Kubernetes cluster. Set this to your Terraform Cloud organization.
The variables configured in the UI will look like this.
»Configure workspace version control
Click on "Settings" then select "Version Control".
Check the "Include submodules on clone" box under the workspace's VCS settings, then click "Update VCS Settings". This tells Terraform Cloud to clone the Helm chart referenced in the submodule, since this tutorial does not use a Helm chart repository.
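To illustrate what this enables: with the submodule cloned, the configuration can install the chart from a local path inside the repository instead of from a chart repository. A sketch, with the chart path as an assumption:
# Sketch: install Consul from the chart vendored as a git submodule.
# The chart path is an illustrative assumption.
resource "helm_release" "consul" {
  name      = var.release_name
  namespace = var.namespace
  chart     = "${path.module}/consul-helm"
}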
»Enable run trigger
Click on "Settings" then select "Run Triggers".
Under "Source Workspaces", select your Kubernetes workspace (`learn-terraform-pipelines-k8s`), then click "Add Workspace".
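If you would rather manage this connection as code than click through the UI, the TFE provider exposes the same concept as a resource. A sketch using this tutorial's default workspace names:
# Sketch: the same run trigger expressed with the TFE provider.
data "tfe_workspace" "k8s" {
  name         = "learn-terraform-pipelines-k8s"
  organization = "hashicorp-learn"
}

data "tfe_workspace" "consul" {
  name         = "learn-terraform-pipelines-consul"
  organization = "hashicorp-learn"
}

resource "tfe_run_trigger" "consul" {
  workspace_id  = data.tfe_workspace.consul.id
  sourceable_id = data.tfe_workspace.k8s.id
}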
You have successfully configured your Consul workspace. The pipeline will retrieve the Kubernetes credentials from the Kubernetes workspace to authenticate to the Kubernetes and Helm providers.
»Create Vault workspace
Fork the Learn Terraform Pipelines Vault repository. Update the `organization` and `workspaces` values in `main.tf` to point to your organization and your workspace name (`learn-terraform-pipelines-vault`).
# main.tf
terraform {
backend "remote" {
organization = "hashicorp-learn"
workspaces {
name = "learn-terraform-pipelines-vault"
}
}
}
The `main.tf` file contains the configuration for the Terraform remote backend, Terraform remote state (to retrieve values from the Kubernetes and Consul workspaces), and Helm provider.
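The Vault configuration layers a second remote state lookup on top of the Kubernetes one, so it can reuse values the Consul workspace exported. A sketch; the output names here are assumptions:
# Sketch: read the Consul workspace's outputs in addition to the cluster's.
data "terraform_remote_state" "consul" {
  backend = "remote"

  config = {
    organization = var.organization
    workspaces = {
      name = var.consul_workspace
    }
  }
}

locals {
  # Reuse the release name and namespace that the Consul workspace exported.
  release_name = data.terraform_remote_state.consul.outputs.release_name
  namespace    = data.terraform_remote_state.consul.outputs.namespace
}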
Then, create a Terraform Cloud workspace connected to your forked repository. Terraform Cloud will confirm that the configuration uploaded successfully.
»Configure variables
Click on "Configure variables" then specify the variables required for this deployment.
Set the variables declared in `variables.tf` as Terraform Variables. `consul_workspace` and `cluster_workspace` should point to their respective workspaces. `organization` should point to your organization.
- consul_workspace — Terraform Cloud workspace for the Consul cluster. If you didn't customize your workspace name, this is `learn-terraform-pipelines-consul`.
- cluster_workspace — Terraform Cloud workspace for the Kubernetes cluster. If you didn't customize your workspace name, this is `learn-terraform-pipelines-k8s`.
- organization — Organization of the workspace that created the Kubernetes cluster. Set this to your Terraform Cloud organization.
The configured variables in the UI will look like this.
»Configure workspace version control
Click on "Settings" then select "Version Control".
Check the "Include submodules on clone" box under the workspace's VCS settings, then click "Update VCS Settings". As with the Consul workspace, this tells Terraform Cloud to clone the Helm chart referenced in the submodule, since this tutorial does not use a Helm chart repository.
»Enable run trigger
Click on "Settings" then select "Run Triggers".
Under "Source Workspaces", select your Consul workspace (`learn-terraform-pipelines-consul`), then click "Add Workspace".
You have successfully configured your Vault workspace. The pipeline will retrieve the Kubernetes credentials from the Kubernetes workspace to authenticate to the Helm provider; the pipeline will retrieve the Helm release name and Kubernetes namespace from the Consul workspace.
»Deploy Kubernetes cluster
Now that you have successfully configured all three workspaces (Kubernetes, Consul, and Vault), you can deploy your Kubernetes cluster.
Select your Kubernetes workspace and click "Queue Plan". If the plan is successful, Terraform Cloud will display a notice that the apply will automatically queue a plan in the Consul workspace, and ask you to confirm and apply.
Notice that a plan for the `learn-terraform-pipelines-consul` workspace will be automatically queued once the apply completes. However, since `enable_consul_and_vault` is set to `false`, the Kubernetes cluster will be deployed with 3 nodes.
Click "Confirm & Apply" to apply this configuration. This process should take about 10 minutes to complete.
Once the apply has been completed, verify your Kubernetes cluster is provisioned by visiting the GKE Console Page. Your Kubernetes cluster should only have three nodes and no workloads.
»Enable Consul and Vault
Now that your Kubernetes cluster has been provisioned, you will deploy Consul and Vault on your cluster.
Navigate to your Kubernetes workspace. Click on "Configure variables", then set the Terraform variable `enable_consul_and_vault` to `true`.
Click "Queue plan". In the run plan, note that the cluster will scale from 3 to 5 nodes. Click "Confirm & Apply" to scale your cluster.
This process should take about 2 minutes to complete.
Notice that a plan for the `learn-terraform-pipelines-consul` workspace will be automatically queued once the apply completes.
»Deploy Consul
Navigate to the Consul workspace, view the run plan, then click "Confirm & Apply". This will deploy Consul onto your cluster using the Helm provider. The plan retrieves the Kubernetes cluster authentication information from the Kubernetes workspace to configure both the Kubernetes and Helm providers.
This process will take about 2 minutes to complete.
Notice that a plan for the `learn-terraform-pipelines-vault` workspace will be automatically queued once the apply completes.
»Deploy Vault
Navigate to the Vault workspace, view the run plan, then click "Confirm & Apply". This will deploy Vault onto your cluster using the Helm provider and configure it to use Consul as the backend. The plan retrieves the Kubernetes namespace from the Consul workspace's remote state and deploys Vault to the same namespace.
This process will take about 2 minutes to complete.
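For context, "configured to use Consul as the backend" means the Helm chart renders a Consul storage stanza into Vault's server configuration, roughly along these lines (the address and path shown are illustrative placeholders):
# Sketch: the kind of storage stanza the deployed Vault servers end up with.
storage "consul" {
  address = "HOST_IP:8500"
  path    = "vault/"
}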
»Verify Consul and Vault deployments
Once the apply has completed, verify the deployments by visiting the GKE Console. Your Kubernetes cluster should now have 5 nodes. Navigate to "Workloads", and notice that Consul and Vault have been deployed.
Notice that the Vault pods have warnings because Vault is sealed. You will have the option to unseal Vault and resolve the warnings once you enable port forwarding.
Verify that Consul and Vault have both been deployed by viewing their respective dashboards.
First, activate your Cloud Shell (button on top right).
Run the following command in Cloud Shell to configure it to access your Kubernetes cluster. Replace `PROJECT_NAME` with your Google Cloud project name. If you didn't use the default values, replace `tfc-pipelines` with your Kubernetes cluster name and `us-central1-a` with your zone.
$ gcloud container clusters get-credentials tfc-pipelines --zone us-central1-a --project PROJECT_NAME
Run the following command — it forwards port `:8500` (the Consul UI) to port `:8080`, allowing you to access it in the Web Preview. Replace `hashicorp-learn` with your Kubernetes namespace.
$ kubectl port-forward -n hashicorp-learn consul-server-0 8080:8500
After you run this command, open your Web Preview on port `:8080` to view the Consul UI.
Note: The Consul UI does not show Vault in the list of services because its `service_registration` stanza in the Helm chart defaults to Kubernetes. However, Vault is still configured to use Consul as a backend.
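Concretely, the chart renders something like the following into Vault's server configuration, so Vault registers itself with Kubernetes rather than with Consul's service catalog:
# Sketch: Vault's default service registration under the Helm chart.
service_registration "kubernetes" {}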
Congratulations — you have successfully completed the tutorial and applied some Terraform Cloud best practices. By keeping your infrastructure configuration modular and integrating workspaces together using run triggers, your Terraform configuration becomes extensible and easier to understand.
»Clean up resources
To clean up the resources and destroy the infrastructure you have provisioned in this tutorial, go to each workspace in the reverse order you created them in, queue a destroy plan, and apply it. Then, delete the workspace from Terraform Cloud. Destroy and delete your workspaces in the following order:
- Vault workspace
- Consul workspace
- Kubernetes workspace
For a more detailed tutorial on destroying resources on Terraform Cloud, reference the Clean up Cloud Resources tutorial.
»Next steps
To watch a video of a demo similar to this tutorial, reference the Infrastructure Pipelines with Terraform Cloud webinar.
To learn how to get started with Consul Service Mesh, visit the Getting Started with Consul Service Mesh Learn track.
To learn how to leverage Vault features on Kubernetes, visit the Kubernetes Learn Vault track.
To learn how to use the TFE provider to deploy the Terraform Cloud resources used in this guide, visit the Use the TFE Provider to Manage Terraform Cloud Workspaces Learn guide.