Run Consul on Kubernetes

Deploy Consul with Kubernetes on AWS

In this guide, you will deploy a Consul datacenter to Amazon Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS) with HashiCorp’s official Helm chart. You do not need to override any values in the Helm chart for a basic installation. However, you will create a config file with custom values to allow access to the Consul UI.

Prerequisites

Installing aws-cli, kubectl, and helm CLI tools

To follow this guide, you will need the aws-cli binary installed, as well as kubectl and helm.

Reference the following instructions for setting up aws-cli, as well as the general AWS CLI documentation:

Reference the following instructions to download kubectl and helm:

Installing helm and kubectl with Homebrew on macOS

Homebrew allows you to quickly install both Helm and kubectl on macOS.

$ brew install kubernetes-cli
$ brew install kubernetes-helm
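
After installation, you can verify that both tools are available on your PATH; the exact version output will vary:

$ kubectl version --client
$ helm version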

VPC and security group creation

The AWS documentation for creating an EKS cluster assumes that you have a VPC and a dedicated security group created. The instructions on how to create these are here:

You will need the SecurityGroups, VpcId, and SubnetId values for the EKS cluster creation step.
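
If you created the VPC following the AWS documentation, one way to look up these values is with the AWS CLI. The tag filter below is an assumption, so adjust it to match how your resources are named:

$ aws ec2 describe-vpcs --filters Name=tag:Name,Values=<your VPC name> --query 'Vpcs[].VpcId'
$ aws ec2 describe-subnets --filters Name=vpc-id,Values=<your VPC ID> --query 'Subnets[].SubnetId'
$ aws ec2 describe-security-groups --filters Name=vpc-id,Values=<your VPC ID> --query 'SecurityGroups[].GroupId'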

Create an EKS cluster

Review the AWS documentation for creating and administering an EKS cluster.
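
If you prefer the CLI to the AWS console, the control plane can be created with something like the following sketch; the role ARN, subnets, and security group are placeholders from the previous step, and you will still need to add worker nodes as described in the AWS documentation:

$ aws eks create-cluster \
    --region <your region> \
    --name <your cluster name> \
    --role-arn <ARN of your EKS cluster IAM role> \
    --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<security-group>

# Wait until the control plane reports ACTIVE before configuring kubectl.
$ aws eks wait cluster-active --region <your region> --name <your cluster name>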

Configure kubectl to talk to your cluster

Setting up kubectl to talk to your EKS cluster should be as simple as running the following:

$ aws eks --region <region where you deployed your cluster> update-kubeconfig --name <your cluster name>

You can then run kubectl cluster-info to verify you are connected to your Kubernetes cluster:

$ kubectl cluster-info
Kubernetes master is running at https://<your K8s master location>.eks.amazonaws.com
CoreDNS is running at https://<your CoreDNS location>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
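
You can also list the worker nodes to confirm they have registered with the control plane; each node should report a Ready status once it has joined:

$ kubectl get nodes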

You can also review the documentation for configuring kubectl for EKS here:

Deploy Consul

You can deploy a complete Consul datacenter using the official Helm chart. By default, the chart will install three Consul servers and one client per Kubernetes node in your EKS cluster. You can review the Helm chart values to learn more about the default settings.

Download the Helm chart

First, you will need to clone the official Helm chart from HashiCorp's GitHub repo.

$ git clone https://github.com/hashicorp/consul-helm.git
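
The default branch may contain unreleased changes, so it is good practice to check out the latest tagged release. The tag name below is a placeholder; list the tags and pick the newest one:

$ git -C consul-helm tag -l
$ git -C consul-helm checkout <latest release tag>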

For testing, it is not necessary to update the Helm chart before deploying Consul; it comes with reasonable defaults. However, for this guide, you will update several values to customize the installation and enable access to the UI. Review the Helm chart documentation to learn more about the chart.

Creating a values file

To customize your deployment, you can pass a YAML file to Helm at deploy time; its values will override the Helm chart's defaults. The following values change the datacenter name, expose the Consul UI via a LoadBalancer service, and enable the syncCatalog feature.

# helm-consul-values.yaml
global:
  datacenter: hashidc1

ui:
  service:
    type: 'LoadBalancer'

syncCatalog:
  enabled: true

Install Consul with Helm

Helm 2

If you're using Helm 2, you need to install and configure Tiller. If you're using Helm 3, skip ahead to the next section, "Helm install."

$ helm init
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Create a Kubernetes service account called "tiller":

$ kubectl --namespace kube-system create serviceaccount tiller
serviceaccount/tiller created

Next, create a Kubernetes clusterrolebinding between the cluster-admin role and the tiller service account. You do not need to customize the following command:

$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

Then, patch the tiller-deploy deployment in the kube-system namespace to use the tiller service account:

$ kubectl --namespace kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
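
You can confirm that the Helm client can reach Tiller by checking the versions; with Helm 2, helm version reports both the client and the server (Tiller) version once Tiller is running:

$ helm version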

Helm install

Now, you can deploy Consul using helm install, passing in the values file you created above and the path to the Helm chart you cloned. We recommend verifying your install or upgrade with --dry-run before the actual run.
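
For example, a dry run renders the chart templates and validates the release without creating anything in the cluster (Helm 3 syntax, matching the install command below):

$ helm install -f helm-consul-values.yaml hashicorp ./consul-helm --dry-run

When the dry run looks correct, run the actual install: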

$ helm install -f helm-consul-values.yaml hashicorp ./consul-helm
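
The command above uses Helm 3 syntax, where the release name (hashicorp) is a positional argument. If you are still running Helm 2 with Tiller, pass the release name with the --name flag instead:

$ helm install -f helm-consul-values.yaml --name hashicorp ./consul-helm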

The output of helm install will show you the details of your Consul deployment. Once the pods are running, the consul members command (run from a server pod or with your local Consul binary, both covered later in this guide) should report three servers and three clients:

$ consul members
Node                           Address        Status  Type    Build  Protocol  DC        Segment
hashicorp-consul-server-0      10.8.1.9:8301  alive   server  1.6.1  2         hashidc1  <all>
hashicorp-consul-server-1      10.8.2.4:8301  alive   server  1.6.1  2         hashidc1  <all>
hashicorp-consul-server-2      10.8.0.8:8301  alive   server  1.6.1  2         hashidc1  <all>
ip-172-31-95-163.ec2.internal  10.8.0.7:8301  alive   client  1.6.1  2         hashidc1  <default>
ip-172-31-81-128.ec2.internal  10.8.1.8:8301  alive   client  1.6.1  2         hashidc1  <default>
ip-172-31-87-198.ec2.internal  10.8.2.3:8301  alive   client  1.6.1  2         hashidc1  <default>
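
You can also check the pods themselves with kubectl. The consul-helm chart labels the pods it creates with app=consul (an assumption based on the chart's default labels), so a label selector narrows the output to the Consul workloads:

$ kubectl get pods --selector app=consul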

Accessing the Consul UI

Since you enabled the Consul UI in your values file, you can run kubectl get services to find the external IP of your UI service.

$ kubectl get services
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP             PORT(S)                                                                   AGE
consul                     ExternalName   <none>        consul.service.consul   <none>                                                                    4s
hashicorp-consul-dns       ClusterIP      10.12.8.3     <none>                  53/TCP,53/UDP                                                             63s
hashicorp-consul-server    ClusterIP      None          <none>                  8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   63s
hashicorp-consul-ui        LoadBalancer   10.12.6.197   104.198.132.100         80:30037/TCP                                                              63s
kubernetes                 ClusterIP      10.12.0.1     <none>                  443/TCP                                                                   77m
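
Because EKS backs LoadBalancer services with an AWS load balancer, the EXTERNAL-IP column may show a DNS hostname rather than an IP address; either works. If you want to script against it, a jsonpath query can pull the address directly (use .ip instead of .hostname if your cluster reports an IP):

$ kubectl get service hashicorp-consul-ui \
    --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'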

You can see above that, in this case, the UI is exposed at 104.198.132.100 over port 80. Navigate to the external IP in your browser to interact with the Consul UI:

[Screenshot: Consul UI]

Accessing Consul with the CLI and API

In addition to accessing Consul with the UI, you can manage Consul with the HTTP API or by directly connecting to the pod with kubectl.

You can use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. You can read the documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
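
For example, since the hashicorp-consul-ui service you exposed earlier forwards to the servers' HTTP port, a quick way to confirm API access from your workstation is to query the catalog through that external address (shown here with the example address from above):

$ curl http://104.198.132.100/v1/catalog/services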

Kubectl

To access the pod and its data directory, you can use kubectl to exec into the pod and start a shell session.

$ kubectl exec -it hashicorp-consul-server-0 -- /bin/sh

This will allow you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members.

$ consul members
Node                           Address        Status  Type    Build  Protocol  DC        Segment
hashicorp-consul-server-0      10.8.1.9:8301  alive   server  1.6.1  2         hashidc1  <all>
hashicorp-consul-server-1      10.8.2.4:8301  alive   server  1.6.1  2         hashidc1  <all>
hashicorp-consul-server-2      10.8.0.8:8301  alive   server  1.6.1  2         hashidc1  <all>
ip-172-31-95-163.ec2.internal  10.8.0.7:8301  alive   client  1.6.1  2         hashidc1  <default>
ip-172-31-81-128.ec2.internal  10.8.1.8:8301  alive   client  1.6.1  2         hashidc1  <default>
ip-172-31-87-198.ec2.internal  10.8.2.3:8301  alive   client  1.6.1  2         hashidc1  <default>

Using Consul environment variables

You can also access the Consul datacenter with your local Consul binary by setting the environment variables documented here.

In this case, since you are exposing HTTP via the load balancer/UI service, you can set CONSUL_HTTP_ADDR to the same external address you used to access the UI above:

$ export CONSUL_HTTP_ADDR=http://104.198.132.100:80

You can now use your local installation of the Consul binary to run Consul commands:

$ consul members
Node                           Address        Status  Type    Build  Protocol  DC        Segment
hashicorp-consul-server-0      10.8.1.9:8301  alive   server  1.6.1  2         hashidc1  <all>
hashicorp-consul-server-1      10.8.2.4:8301  alive   server  1.6.1  2         hashidc1  <all>
hashicorp-consul-server-2      10.8.0.8:8301  alive   server  1.6.1  2         hashidc1  <all>
ip-172-31-95-163.ec2.internal  10.8.0.7:8301  alive   client  1.6.1  2         hashidc1  <default>
ip-172-31-81-128.ec2.internal  10.8.1.8:8301  alive   client  1.6.1  2         hashidc1  <default>
ip-172-31-87-198.ec2.internal  10.8.2.3:8301  alive   client  1.6.1  2         hashidc1  <default>
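
Other read-only CLI commands work the same way. For example, with syncCatalog enabled, listing the catalog services will also show Kubernetes services that have been synced into Consul:

$ consul catalog services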

Summary

In this guide, you deployed a Consul datacenter to Amazon Elastic Kubernetes Service (EKS) using the official Helm chart. You also configured access to the Consul UI. To learn more about deployment best practices, review the Kubernetes Reference Architecture guide.