
Manage Kubernetes Resources via Terraform

Kubernetes (K8s) is an open-source workload scheduler focused on containerized applications. You can use the Terraform Kubernetes provider to interact with resources supported by Kubernetes.

In this guide, you will learn how to interact with Kubernetes using Terraform by scheduling and exposing an NGINX deployment on a Kubernetes cluster.

The final Terraform configuration files used in this guide can be found in the Deploy NGINX on Kubernetes via Terraform GitHub repository.

»Why deploy with Terraform?

While you could use kubectl or similar CLI-based tools to manage your Kubernetes resources, using Terraform has the following benefits:

  • Unified Workflow - If you are already provisioning Kubernetes clusters with Terraform, use the same configuration language to deploy your applications into your cluster.

  • Full Lifecycle Management - Terraform doesn't only create resources; it also updates and deletes tracked resources without requiring you to inspect the API to identify them.

  • Graph of Relationships - Terraform understands dependency relationships between resources. For example, if a Persistent Volume Claim claims space from a particular Persistent Volume, Terraform won't attempt to create the claim if it fails to create the volume, as sketched below.
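
To make that last point concrete, here is a minimal, hypothetical sketch of that Persistent Volume relationship. The resource names and the host path are illustrative only and are not part of this guide's configuration.

# A volume backed by a path on the node (for illustration only).
resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example-volume"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/tmp/example-volume"
      }
    }
  }
}

# The claim references the volume by name, so Terraform creates the volume
# first and won't attempt the claim if the volume fails.
resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-claim"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    volume_name = kubernetes_persistent_volume.example.metadata[0].name
  }
}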

»Prerequisites

This guide assumes basic familiarity with Kubernetes and kubectl.

It also assumes that you are familiar with the usual Terraform plan/apply workflow. If you're new to Terraform itself, refer first to the Getting Started guide.

For this guide, you will need an existing Kubernetes cluster. If you don't have a Kubernetes cluster, you can use kind to provision a local Kubernetes cluster or provision one on a cloud provider.

Follow the kind installation instructions or use a package manager for your operating system. On macOS, you can install kind with Homebrew.

$ brew install kind

Once you've done this, download and save the kind configuration into a file named kind-config.yaml. This configuration adds extra port mappings, so you can access the NGINX service locally later.

$ curl https://raw.githubusercontent.com/hashicorp/learn-terraform-deploy-nginx-kubernetes-provider/master/kind-config.yaml --output kind-config.yaml

Then, create a kind Kubernetes cluster.

$ kind create cluster --name terraform-learn --config kind-config.yaml
Creating cluster "terraform-learn" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-terraform-learn"
You can now use your cluster with:

kubectl cluster-info --context kind-terraform-learn

Have a nice day! 👋

Verify that your cluster exists by listing your kind clusters.

$ kind get clusters
terraform-learn

Then, point kubectl to interact with this cluster. The context is kind- followed by the name of your cluster.

$ kubectl cluster-info --context kind-terraform-learn
Kubernetes master is running at https://127.0.0.1:32769
KubeDNS is running at https://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

»Configure the provider

Before you can schedule any Kubernetes services using Terraform, you need to configure the Terraform Kubernetes provider.

For this guide, it's easiest to rely on kubectl's context, no matter how you provisioned your cluster. The provider automatically uses kubectl's current context.

You can also configure the provider to explicitly target a specific Kubernetes cluster. This is useful if you're managing multiple Kubernetes clusters and kubectl contexts, because it prevents you from accidentally applying a configuration meant for a different cluster.
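
For example, a minimal sketch of an explicit provider configuration might look like this. The context name assumes the kind cluster created earlier in this guide; substitute your own kubeconfig path and context if they differ.

provider "kubernetes" {
  # Path to your kubeconfig file. "~/.kube/config" is the common default.
  config_path = "~/.kube/config"

  # Pin the provider to a specific context so you can't apply against the wrong cluster.
  config_context = "kind-terraform-learn"
}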

Create a directory named learn-terraform-deploy-nginx-kubernetes and navigate into it.

$ mkdir learn-terraform-deploy-nginx-kubernetes
$ cd learn-terraform-deploy-nginx-kubernetes

Then, create a new file named kubernetes.tf and add the following configuration to it.

provider "kubernetes" {}

The provider block can be completely empty because, by default, the provider reads the credentials and current context from your kubeconfig file. This is the most straightforward way to configure the Terraform Kubernetes provider.

Verify kubectl's current-context is pointing to your Kubernetes cluster. If you're running kind, your current-context should be kind-terraform-learn.

$ kubectl config current-context
kind-terraform-learn

If it points to a different cluster, switch to the context for your target cluster.

$ kubectl config use-context kind-terraform-learn
Switched to context "kind-terraform-learn".

After configuring the provider, run terraform init to download the latest version of the Kubernetes provider and initialize your Terraform workspace.

$ terraform init

»Schedule a deployment

Add the following to your kubernetes.tf file. This Terraform configuration will schedule an NGINX deployment with two replicas on your Kubernetes cluster, internally exposing port 80 (HTTP).

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "scalable-nginx-example"
    labels = {
      App = "ScalableNginxExample"
    }
  }

  spec {
    replicas = 2
    selector {
      match_labels = {
        App = "ScalableNginxExample"
      }
    }
    template {
      metadata {
        labels = {
          App = "ScalableNginxExample"
        }
      }
      spec {
        container {
          image = "nginx:1.7.8"
          name  = "example"

          port {
            container_port = 80
          }

          resources {
            limits {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}

You may notice the similarities between this Terraform configuration and a Kubernetes YAML manifest.

Apply the configuration to schedule the NGINX deployment.

$ terraform apply

Once the apply is complete, verify the NGINX deployment is running.

$ kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
scalable-nginx-example   2/2     2            2           15s

»Schedule a Service

There are multiple Kubernetes Service types you can use to expose NGINX to your users.

If your Kubernetes cluster is hosted locally on kind, you will expose your NGINX instance via NodePort. This exposes the service on each node's IP at a static port, allowing you to access the service from outside the cluster at <NodeIP>:<NodePort>.

If your Kubernetes cluster is hosted on a cloud provider, you will expose your NGINX instance via LoadBalancer instead. This exposes the service externally using the cloud provider's load balancer.
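
If you are following along on a cloud-hosted cluster, a minimal sketch of that LoadBalancer variant might look like the following. The resource name nginx_lb and Service name nginx-example-lb are hypothetical; the rest of this guide uses the NodePort Service shown below.

resource "kubernetes_service" "nginx_lb" {
  metadata {
    name = "nginx-example-lb"
  }
  spec {
    selector = {
      App = kubernetes_deployment.nginx.spec[0].template[0].metadata[0].labels.App
    }
    port {
      port        = 80
      target_port = 80
    }

    # The cloud provider provisions an external load balancer for this Service.
    type = "LoadBalancer"
  }
}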

Notice how the Kubernetes Service resource block dynamically assigns the selector from the Deployment's labels, rather than repeating the value. This avoids common bugs caused by mismatched service label selectors.

Add the following configuration to your kubernetes.tf file. This will expose the NGINX instance on node_port 30201.

resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx-example"
  }
  spec {
    selector = {
      App = kubernetes_deployment.nginx.spec[0].template[0].metadata[0].labels.App
    }
    port {
      node_port   = 30201
      port        = 80
      target_port = 80
    }

    type = "NodePort"
  }
}

Apply the configuration to schedule the NodePort Service.

$ terraform apply

Once the apply is complete, verify the NGINX service is running.

$ kubectl get services
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1     <none>        443/TCP        2m53s
nginx-example   NodePort    10.96.55.64   <none>        80:30201/TCP   76s

You can access the NGINX instance by navigating to the NodePort at http://localhost:30201/.

»Scale the deployment

You can scale your deployment by increasing the replicas field in your configuration. Change the number of replicas in your Kubernetes deployment from 2 to 4.

resource "kubernetes_deployment" "nginx" {
  # ...

  spec {
    replicas = 4

    # ...
  }

  # ...
}

Apply the change to scale your deployment.

$ terraform apply

Once the apply is complete, verify the NGINX deployment has four replicas.

$ kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
scalable-nginx-example   4/4     4            4           4m48s

»Clean up your workspace

Remember to destroy any resources you created once you're done with this guide.

Running terraform destroy will de-provision the NGINX deployment and service you created in this guide.

$ terraform destroy

If you are using a kind Kubernetes cluster, run the following command to delete it.

$ kind delete cluster --name terraform-learn

»Next steps

In this guide, you configured the Terraform Kubernetes provider and used it to schedule, expose and scale an NGINX instance.

To discover additional capabilities, visit the Terraform Kubernetes Provider Registry Documentation Page.

For a more in-depth Kubernetes example, follow the Deploy Consul and Vault on a Kubernetes Cluster using Run Triggers guide (this guide is GKE-based).