Getting Started with Kubernetes

Vault Installation to Minikube via Helm

Running Vault on Kubernetes is generally the same as running it anywhere else. Kubernetes, as a container orchestration engine, eases some of the operational burdens. Helm charts provide the benefit of a refined interface when it comes to deploying Vault in a variety of different modes.

In this guide, you will set up Vault and its dependencies with a Helm chart. Then you will integrate a web application that uses the Kubernetes service account token to authenticate with Vault and retrieve a secret.

Prerequisites

This guide requires the Kubernetes command-line interface (CLI) and the Helm CLI installed, Minikube, the Vault and Consul Helm charts, the sample web application, and additional configuration to bring it all together.

This guide was last tested 24 Oct 2019 on macOS 10.15 using this configuration:

$ docker version
Client: Docker Engine - Community
  Version:          19.03.4

$ minikube version
minikube version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631

$ cd consul-helm && git rev-parse HEAD && cd ..
ac210e339c4a4de34b07343db0690374b5d92440

$ cd vault-helm && git rev-parse HEAD && cd ..
09f56da5482096c1e213d5a0f1b1463503def82e

$ helm version
# NOTE: The helm version command requires Helm to be initialized.
#       See the "Initialize Helm" step in this guide
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Although we recommend these software versions, the output you see may vary depending on your environment and the software versions you use.

First, follow the directions for installing Minikube, including VirtualBox or a similar virtualization provider.

Next, install kubectl CLI and helm CLI.

On macOS with Homebrew:

$ brew install kubernetes-cli
$ brew install kubernetes-helm

On Windows with Chocolatey:

$ choco install kubernetes-cli
$ choco install kubernetes-helm

Next, retrieve the web application and additional configuration by cloning the hashicorp/vault-guides repository from GitHub.

$ git clone https://github.com/hashicorp/vault-guides.git

This repository contains supporting content for all of the Vault learn guides. The content specific to this guide can be found within a sub-directory.

Go into the vault-guides/operations/provision-vault/kubernetes/minikube/getting-started directory.

$ cd vault-guides/operations/provision-vault/kubernetes/minikube/getting-started

NOTE This guide assumes that the remainder of the commands are executed within this directory.

Next, retrieve the Vault Helm chart and Consul Helm chart.

Clone the Vault Helm chart repository:

$ git clone https://github.com/hashicorp/vault-helm.git

Clone the Consul Helm chart repository:

$ git clone https://github.com/hashicorp/consul-helm.git

Start Minikube

Minikube is a CLI tool that provisions and manages the lifecycle of single-node Kubernetes clusters running locally inside virtual machines (VMs) on your system.

Start a Kubernetes cluster with 4096 megabytes (MB) of memory:

$ minikube start --memory 4096
😄  minikube v1.3.1 on Darwin 10.15
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

The --memory flag is set to 4096 MB to ensure there is plenty of memory for all the resources to run concurrently. The initialization process takes several minutes as it retrieves any necessary dependencies and starts the required container images.
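
If you prefer not to pass the flag on every start, minikube can persist the setting for future clusters; a small sketch (it takes effect the next time a cluster is created):

# Persist the memory setting used by subsequent minikube start runs
$ minikube config set memory 4096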

Verify the status of the Minikube cluster:

$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.114

NOTE Even if the previous step completed successfully, you may have to wait a minute or two for Minikube to be available. If you see an error, try again after a few minutes.

The host, kubelet, and apiserver report that they are running. kubectl, the command-line interface (CLI) for running commands against Kubernetes clusters, is also configured to communicate with this recently started cluster.

Minikube provides a visual representation of the status in a web-based dashboard. This interface makes it easy to view cluster activity and investigate any issues affecting it.

In another terminal, launch the minikube dashboard:

$ minikube dashboard

The operating system's default browser opens and displays the dashboard.

Minikube Dashboard

Initialize Helm

Helm is a package manager that enables the community to share charts describing the resources that run within Kubernetes. Helm requires initialization so that it can install an in-cluster component called Tiller. Tiller interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources.

Initialize Helm and start Tiller:

$ helm init
$HELM_HOME has been configured at /Users/USERNAME/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

Tiller runs as a pod within the kube-system namespace. A pod is a Kubernetes resource that represents one or more cooperating processes that form a cohesive unit of service. Namespaces, another Kubernetes resource, provide a scope for names. The kube-system namespace is reserved for resources created to support the Kubernetes system.
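
To see the namespaces that exist in the cluster, including kube-system, list them:

$ kubectl get namespaces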

Verify that Tiller is running by getting all the pods within the kube-system namespace:

$ kubectl get pods --namespace kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-s8cdv                1/1     Running   1          7m17s
coredns-5c98db65d4-vh5tw                1/1     Running   1          7m17s
etcd-minikube                           1/1     Running   0          6m20s
kube-addon-manager-minikube             1/1     Running   0          6m19s
kube-apiserver-minikube                 1/1     Running   0          6m12s
kube-controller-manager-minikube        1/1     Running   0          6m9s
kube-proxy-llgmm                        1/1     Running   0          7m17s
kube-scheduler-minikube                 1/1     Running   0          6m12s
kubernetes-dashboard-7b8ddcb5d6-7gs2l   1/1     Running   0          7m16s
storage-provisioner                     1/1     Running   0          7m16s
tiller-deploy-75f6c87b87-n4db8          1/1     Running   0          21s

The Tiller service appears here as the pod named tiller-deploy-75f6c87b87-n4db8. It reports that it is ready and running.

You can also verify that Tiller is running through the dashboard by viewing all the pods within the kube-system namespace:

Minikube dashboard highlighting the "kube-system" namespace

NOTE Is the Tiller pod not showing? Try turning it off and back on again with the reset and init subcommands.
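
A minimal sketch of that reset-and-retry sequence, assuming Helm 2 where the reset subcommand removes Tiller from the cluster:

# Remove Tiller, then install it again
$ helm reset
$ helm init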

Install the Consul Helm chart

Consul is a service mesh solution that ships with a key-value store. Vault requires a storage backend to manage its configuration and secrets. The Vault Helm chart defaults to a Consul storage backend.

The recommended way to run Consul on Kubernetes is via the Helm chart. This installs and configures all the necessary components to run Consul in several different modes. A Helm chart includes templates that enable conditional and parameterized execution. These parameters can be set through command-line arguments or defined in YAML. For Consul to act as a storage backend for Vault, a minimum of one server and one client is required.

$ cat helm-consul-values.yaml
global:
  datacenter: vault-kubernetes-guide

client:
  enabled: true

server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
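
As an alternative to a values file, the same parameters could be supplied with --set flags; a sketch of the equivalent invocation, shown for illustration only (do not run both forms):

$ helm install --name consul consul-helm \
    --set global.datacenter=vault-kubernetes-guide \
    --set client.enabled=true \
    --set server.replicas=1 \
    --set server.bootstrapExpect=1 \
    --set server.disruptionBudget.enabled=true \
    --set server.disruptionBudget.maxUnavailable=0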

Install the Consul Helm chart found in the consul-helm directory, launching pods with names prefixed with consul and applying the values found in helm-consul-values.yaml:

$ helm install --name consul --values helm-consul-values.yaml consul-helm
NAME:   consul
LAST DEPLOYED: Fri Oct  4 14:22:19 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:

...

==> v1/Pod(related)
NAME                    READY  STATUS             RESTARTS  AGE
consul-consul-w769k     0/1    ContainerCreating  0         0s
consul-consul-server-0  0/1    ContainerCreating  0         0s

...

Your release is named consul. To learn more about the release, try:

  $ helm status consul
  $ helm get consul

The installation of the Helm chart displays the namespace, status, and resources created. The server and client pods were deployed in the default namespace because no namespace was specified or configured as the default.
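
For illustration only, a release can be placed elsewhere by passing the --namespace flag at install time; a hedged sketch (the consul namespace here is an arbitrary example):

$ helm install --name consul --namespace consul --values helm-consul-values.yaml consul-helm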

To verify, get all the pods within the default namespace:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
consul-consul-w769k      1/1     Running   0          37s
consul-consul-server-0   1/1     Running   0          37s

The Consul service appears here as the pods prefixed with consul. The server and client report that they are Running and that they are ready (1/1).

Minikube dashboard showing Consul pods

For more information, refer to the Installing Consul to Minikube via Helm guide.

Install the Vault Helm chart

The recommended way to run Vault on Kubernetes is via the Helm chart. This installs and configures all the necessary components to run Vault in several different modes.

Install the Vault Helm chart found in the vault-helm directory, launching pods with names prefixed with vault and with authentication delegation enabled:

$ helm install --name vault --set 'authDelegator.enabled=true' vault-helm
NAME:   vault
LAST DEPLOYED: Mon Oct  7 14:12:02 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:

...

==> v1/Pod(related)
NAME     READY  STATUS   RESTARTS  AGE
vault-0  0/1    Pending  0         0s

...

Your release is named vault. To learn more about the release, try:

  $ helm status vault
  $ helm get vault

The installation of the Helm chart shows that a Vault pod was deployed into the default namespace.

To verify, get all the pods within the default namespace:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
...
vault-0                  0/1     Running   0          42s

The vault-0 pod reports that it is Running but that it is not ready (0/1). The Helm chart defines a readinessProbe that executes a command to check Vault's status.
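
To inspect the probe the chart defined, you can read it from the pod specification; a sketch (the exact probe command comes from the Helm chart):

$ kubectl get pod vault-0 -o jsonpath='{.spec.containers[0].readinessProbe}'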

Retrieve the status of Vault on the vault-0 pod:

$ kubectl exec vault-0 -- vault status
Key                Value
---                -----
Seal Type          shamir
Initialized        false
Sealed             true
Total Shares       0
Threshold          0
Unseal Progress    0/0
Unseal Nonce       n/a
Version            n/a
HA Enabled         false
command terminated with exit code 2

The status command reports that Vault is not initialized and that it is still sealed.

Minikube dashboard showing sealed Vault pod

Reviewing the vault-0 pod in the web interface provides the results of the status command inline.

Initialize and unseal Vault

When Vault is started, it begins in an uninitialized and sealed state. Prior to initialization, Vault's storage backend, Consul, is not prepared to receive data.

Initialize Vault with one key share and a key threshold of one on the vault-0 pod:

$ kubectl exec vault-0 -- vault operator init -key-shares 1 -key-threshold 1

Unseal Key 1: X0MCbyOiBbL5KZwEAJMhJus3a4kq4qVP+i7G0tPCjTs=

Initial Root Token: s.CLD1rHKsJYqZCwa2nm1GTJKy
...

The init command generates a master key that it disassembles into key shares and then sets the number of key shares required to unseal Vault. These key shares appear in the output as unseal keys.
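
For illustration only (this guide uses a single share), a production-style initialization would typically request several shares with a smaller threshold, for example five shares of which any three can reconstruct the master key:

$ vault operator init -key-shares=5 -key-threshold=3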

The command also generated an initial root token, which is used in a later step to log in to Vault.

After initialization, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it. Unsealing is the process of constructing the master key necessary to read the decryption key to decrypt the data, allowing access to the Vault.

Unseal Vault on the vault-0 pod with Unseal Key 1:

$ kubectl exec vault-0 -- vault operator unseal X0MCbyOiBbL5KZwEAJMhJus3a4kq4qVP+i7G0tPCjTs=
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.2.2
Cluster Name    vault-cluster-1c91424c
Cluster ID      57f0a494-97b8-59f5-1760-b63f896296a6
HA Enabled      false

The unseal command reports that Vault is initialized and unsealed. The vault-0 pod now also reports that it is ready (1/1):

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
...
vault-0                  1/1     Running   0          39m

Minikube dashboard vault-0 pod unsealed

Vault is now ready for you to login with the initial root token.

Login to Vault

Vault requires clients to authenticate first before it allows any further actions. An unsealed Vault starts with the Token Auth Method enabled and generates an initial root token.

In previous steps you interacted with Vault via kubectl exec, but the Vault CLI installed on your workstation can also target a remote Vault. This is because the Vault CLI is a thin wrapper around the HTTP API that, by default, sends requests to the address https://localhost:8200.

The Vault server, running in the vault-0 pod, exposes an HTTP service listening on port 8200. A service running within a pod, listening on a specific port, can be reached locally on your workstation if you instruct Kubernetes to forward ports.

In another terminal, port forward all requests made to http://localhost:8200 to the vault-0 pod on port 8200:

$ kubectl port-forward vault-0 8200:8200
Forwarding from localhost:8200 -> 8200
Forwarding from [::1]:8200 -> 8200

The results show the ports that are forwarded. The forwarding session ends when the selected pod terminates or you terminate the process with a SIGINT (ctrl+c).

Next, configure the local Vault CLI to communicate via HTTP rather than HTTPS by setting the target Vault address in an environment variable.

In the original terminal, export an environment variable named VAULT_ADDR set to http://localhost:8200:

$ export VAULT_ADDR="http://localhost:8200"

Verify that the port-forwarding and environment variable are configured correctly by requesting the status:

$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.2.2
Cluster Name    vault-cluster-1c91424c
Cluster ID      57f0a494-97b8-59f5-1760-b63f896296a6
HA Enabled      false

This is the same status report that you saw earlier when executing the kubectl exec vault-0 -- vault status command.
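
Because the CLI is a thin wrapper around the HTTP API, the same information is also available directly from the API; for example, the sys/seal-status endpoint:

$ curl http://localhost:8200/v1/sys/seal-status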

Finally, login with the initial root token that was displayed when you initialized Vault:

$ vault login s.CLD1rHKsJYqZCwa2nm1GTJKy
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests automatically use this token.

Key                  Value
---                  -----
token                s.CLD1rHKsJYqZCwa2nm1GTJKy
token_accessor       aRD2O9GuNDG0Ex9X1MxZjhhO
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

The token s.CLD1rHKsJYqZCwa2nm1GTJKy used here is the initial root token generated when Vault was initialized in a previous step.

You are now logged in and ready to setup a secret for the web application.

Set a secret in Vault

The web application that you deploy in the final step expects Vault to store a username and password at the path secret/exampleapp/config. Creating this secret requires that a key-value secrets engine is enabled and that a username and password are written at the specified path.

Enable kv-v2 secrets at the path secret:

$ vault secrets enable -path=secret kv-v2
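
You can confirm that the engine is mounted at the expected path by listing the enabled secrets engines:

$ vault secrets list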

NOTE This guide focuses on Vault's integration with Kubernetes, not on interacting with the key-value secrets engine. For more information refer to the Static Secrets: Key/Value Secret guide.

Put a username and password secret at the path secret/exampleapp/config:

$ vault kv put secret/exampleapp/config username="helmchart" password="secrets"

Verify that the secret is defined at the path secret/exampleapp/config:

$ vault kv get secret/exampleapp/config

You verified your connection with the Vault server and created a secret for the web application.

Configure Kubernetes authentication

The initial root token is a privileged credential that can perform any operation, but this web application only requires the ability to read a single secret. A policy can define a smaller set of capabilities, and these policies are assigned to an authentication method.

Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes Service Account Token.

Enable the Kubernetes authentication method:

$ vault auth enable kubernetes

Vault accepts this service token from any client within the Kubernetes cluster. During authentication, Vault verifies that the service account token is valid by querying a configured Kubernetes endpoint. Configuring the Kubernetes authentication method requires extracting details about the cluster.

Define an environment variable that holds the name of the secret containing the Vault service account (SA) token:

$ export VAULT_SA_NAME=$(kubectl get sa vault -o jsonpath="{.secrets[*]['name']}")

Define an environment variable that holds the service account's JSON Web Token (JWT), decoded from base64:

$ export SA_JWT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)

Define an environment variable that is the certificate authority's (CA) certificate for the Kubernetes host:

$ export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)

Define an environment variable for the IP address of the Kubernetes host in Minikube:

$ export K8S_HOST=$(minikube ip)

Configure the Kubernetes authentication method to use the service account token, the location of the Kubernetes host, and its certificate:

$ vault write auth/kubernetes/config \
        token_reviewer_jwt="$SA_JWT" \
        kubernetes_host="https://$K8S_HOST:8443" \
        kubernetes_ca_cert="$SA_CA_CRT"
Success! Data written to: auth/kubernetes/config

For a client to read the secret data written earlier at secret/exampleapp/config, the read capability must be granted for the path secret/data/exampleapp/config (the kv-v2 secrets engine prefixes the path with data/).

Write out a policy named exampleapp that enables the read capability for secrets at the path secret/data/exampleapp/config:

$ vault policy write exampleapp - <<EOH
path "secret/data/exampleapp/config" {
  capabilities = ["read"]
}
EOH
Success! Uploaded policy: exampleapp

Create a Kubernetes authentication role, named exampleapp, that connects the Kubernetes service account name and policy:

$ vault write auth/kubernetes/role/exampleapp \
        bound_service_account_names=vault \
        bound_service_account_namespaces=default \
        policies=exampleapp \
        ttl=24h
Success! Data written to: auth/kubernetes/role/exampleapp

Lastly, the Kubernetes service account, named vault in the last command, was automatically created when the Vault Helm chart was installed. Completing the configuration requires that this service account be granted a cluster-wide role with access to the Kubernetes TokenReview API.

Update the Kubernetes service account responsibilities to include token review:

$ kubectl apply --filename vault-auth-service-account.yaml
clusterrolebinding.rbac.authorization.k8s.io/role-tokenreview-binding created
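
To review the binding that was created, describe it by the name reported above:

$ kubectl describe clusterrolebinding role-tokenreview-binding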

The web application can now authenticate and the token that it is granted has the capability of reading the secret.

Launch a web application

The example web application performs the single function of listening for requests on port 8080. During a request it reads the Kubernetes service token, logs into Vault, and then requests the secret.
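
Conceptually, what the application does on each request is equivalent to the following sketch, run from inside a pod (the in-cluster address http://vault:8200 is an assumption about the service name the Vault Helm chart creates):

# Read the service account JWT that Kubernetes mounts into the pod
$ JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Log in with the Kubernetes auth method using the exampleapp role
$ curl --request POST \
    --data "{\"role\": \"exampleapp\", \"jwt\": \"$JWT\"}" \
    http://vault:8200/v1/auth/kubernetes/login

# Read the secret with the client token returned by the login response
$ curl --header "X-Vault-Token: <client_token from the login response>" \
    http://vault:8200/v1/secret/data/exampleapp/config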

Deploy the exampleapp in Kubernetes by applying the file k8s-exampleapp.yaml:

$ kubectl apply --filename k8s-exampleapp.yaml

The example application runs as a pod within the default namespace.

Get all the pods within the default namespace:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
consul-consul-w769k          1/1     Running   0          152m
consul-consul-server-0       1/1     Running   0          152m
exampleapp-5c76d96c6-r4mcq   1/1     Running   0          3m3s
vault-0                      1/1     Running   0          148m

The example service appears here as the pod named exampleapp-5c76d96c6-r4mcq.

NOTE The deployment of the service requires that the web application container image is retrieved from Docker Hub. While the image is being retrieved, the pod's STATUS is ContainerCreating and it reports that it is not ready (0/1).

The web application, running in the exampleapp-5c76d96c6-r4mcq pod, exposes an HTTP service listening on port 8080.

In another terminal, port forward all requests made to http://localhost:8080 to the exampleapp pod on port 8080:

$ kubectl port-forward exampleapp-5c76d96c6-r4mcq 8080:8080

In the original terminal, execute a curl request to http://localhost:8080:

$ curl http://localhost:8080
{"password"=>"secret", "username"=>"helmchart"}%

Web application showing username and password secret

Summary

For more on Consul's integration with Kubernetes (including multi-cloud, service sync, and other features), see the Consul with Kubernetes documentation. The Consul and Vault Helm charts provide configuration settings that can be explored in their respective repositories.