Run Consul on Kubernetes

Deploy Consul on Azure Kubernetes Service (AKS)

In this guide you will deploy a Consul datacenter on Azure Kubernetes Service (AKS) with the official Helm chart. You do not need to update any values in the Helm chart for a basic installation. However, you can create a values file with parameters to allow access to the Consul UI. At the beginning of this guide, you can watch an optional Azure Friday demo.


To complete this guide successfully, you should have an Azure account with the ability to create a Kubernetes cluster.

All the tools you need are pre-installed in the Azure Cloud Shell. Open the Cloud Shell to follow along; this guide uses the Linux Bash shell.

The code for this example is in a git repository. Clone this repository within your cloud shell before starting the rest of the tutorial.

$ git clone https://github.com/hashicorp/demo-consul-101.git

»Watch the Azure Friday demo - optional

This 12-minute video was created by HashiCorp and Azure to demonstrate Consul service mesh capabilities on AKS.

»AKS Configuration

You'll create a Kubernetes cluster on Azure Kubernetes Service and run Consul on it together with a few microservices which use Consul to discover each other and communicate securely with Consul Connect (Consul's service mesh feature).

»Create an AKS Cluster with Terraform

First, create an Azure Kubernetes Service cluster. We'll use Terraform to create the cluster with the features we need for this demo.

Change into the k8s/terraform/azure/01-create-aks-cluster directory.

$ cd demo-consul-101/k8s/terraform/azure/01-create-aks-cluster

Run the az command with the following arguments to create an Active Directory service principal account for this demo. If it works correctly, you'll see a JSON snippet that includes your appId, password, and other values.

$ az ad sp create-for-rbac --skip-assignment

{
  "appId": "aaaa-aaaa-aaaa",
  "displayName": "azure-cli-2019-04-11-00-46-05",
  "name": "http://azure-cli-2019-04-11-00-46-05",
  "password": "aaaa-aaaa-aaaa",
  "tenant": "aaaa-aaaa-aaaa"
}

Use these values to configure Terraform. Open a new terraform.tfvars file in the in-browser text editor from the cloud shell with the code command.

$ code terraform.tfvars

Next, copy the appId and password values from the JSON output of the az command above into the new terraform.tfvars file, converting them to Terraform variable syntax: remove the curly braces and the quotes around variable names, use the = sign for assignment, and drop the trailing commas. Assuming the Terraform configuration declares client_id and client_secret variables, the result looks like this:

# terraform.tfvars
client_id     = "aaaa-aaaa-aaaa"
client_secret = "aaaa-aaaa-aaaa"
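If you prefer to script the conversion, here is a minimal sketch using sed. It assumes the Terraform configuration expects client_id and client_secret variables; the credential values are placeholders standing in for the real az output.

```shell
# Sketch: convert the az service principal JSON into terraform.tfvars syntax.
# The client_id/client_secret variable names are assumptions about the
# Terraform configuration, and the credential values are placeholders.
cat > sp.json <<'EOF'
{
  "appId": "aaaa-aaaa-aaaa",
  "password": "bbbb-bbbb-bbbb",
  "tenant": "cccc-cccc-cccc"
}
EOF

# Pull the two fields we need out of the JSON.
client_id=$(sed -n 's/.*"appId": *"\([^"]*\)".*/\1/p' sp.json)
client_secret=$(sed -n 's/.*"password": *"\([^"]*\)".*/\1/p' sp.json)

# Write them in Terraform variable syntax.
cat > terraform.tfvars <<EOF
client_id     = "$client_id"
client_secret = "$client_secret"
EOF

cat terraform.tfvars
```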

Now you're ready to initialize the Terraform project.

$ terraform init

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "azurerm" (1.27.0)...

Terraform has been successfully initialized!

The final step in this section is to run terraform apply to create the cluster. Respond with yes when prompted.

$ terraform apply

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

kubernetes_cluster_name = demo-aks
resource_group_name = demo-rg

Optionally, you may review the Terraform files to see the configuration code needed to create the cluster on AKS. Note the lines which specify that role_based_access_control should be used.

resource "azurerm_kubernetes_cluster" "default" {
  # ...

  role_based_access_control {
    enabled = true
  }

  # ...
}
»Enable the Kubernetes Dashboard

Before running kubectl commands against the new cluster, fetch its credentials into your kubeconfig:

$ az aks get-credentials --resource-group demo-rg --name demo-aks

Then, in order to use the Kubernetes dashboard, create a ClusterRoleBinding:

$ kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

At this point, all the necessary prerequisites should be installed and running. While still in the cloud shell, you can use the az aks browse command to open a new web browser tab with the Kubernetes dashboard.

# View k8s dashboard
$ az aks browse --resource-group demo-rg --name demo-aks

The Kubernetes dashboard will open in your web browser.

»Consul Configuration

Now that your AKS cluster is running, you're ready to install Consul on the cluster. Consul can run inside or outside of a Kubernetes cluster, but for this demo you will run Consul itself inside Kubernetes pods.

»Install Consul with Helm

Change back to the k8s directory at the root of the project.

$ cd ~/demo-consul-101/k8s

Add the HashiCorp Helm Chart repository:

$ helm repo add hashicorp https://helm.releases.hashicorp.com

"hashicorp" has been added to your repositories

Optionally, open the helm-consul-values.yaml file with the code command to review the configuration that the Helm chart will use. You'll see that a datacenter name is specified, a load balancer is configured, and the Consul UI will be exposed through the load balancer.

$ code helm-consul-values.yaml
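Based on that description, helm-consul-values.yaml likely resembles the following sketch. The key names follow the public hashicorp/consul Helm chart; the repository's actual file may differ.

```yaml
# Sketch of helm-consul-values.yaml; the repository's actual file may differ.
global:
  # Name of the Consul datacenter the agents join.
  datacenter: dc1

ui:
  # Expose the Consul UI through an Azure load balancer.
  service:
    type: LoadBalancer

connectInject:
  # Enable the Connect sidecar injector for the demo services.
  enabled: true
```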

We can now use helm to install Consul using the hashicorp/consul chart.

$ helm install azure hashicorp/consul -f helm-consul-values.yaml

NAME:   azure
LAST DEPLOYED: Thu Apr 11 01:09:01 2019
NAMESPACE: default


It may take a few minutes for the pods to spin up. When they are ready, you can view the Consul UI in your web browser.

# View Consul UI
$ kubectl get service azure-consul-ui --watch

NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-consul-ui   LoadBalancer   10.0.x.x     <pending>     80:31768/TCP   35s
azure-consul-ui   LoadBalancer   10.0.x.x     52.x.x.x      80:31768/TCP   104s

It may take a few minutes, but you'll see an entry for EXTERNAL-IP. View that IP address in your web browser and you'll see the Consul UI.

Consul Services

Click through to the Nodes screen and you'll see several Consul servers and agents running.

Consul Nodes

»Deploy Microservices

As the final deployment step, let's deploy a few containers which contain microservices. A backend counting service returns a JSON snippet with an incrementing number. A dashboard service displays the number it fetches from the counting service, along with debugging information about whether the backend service can be reached.
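The counting API's behavior described above can be pictured with a tiny sketch: each request returns a JSON body with a number one higher than the last. The "count" field name is an assumption for illustration, not the demo's actual schema.

```shell
# Illustrative only: emulate the JSON shape of an incrementing counting API.
# The "count" field name is an assumption, not the demo's actual schema.
count=0
for request in 1 2 3; do
  count=$((count + 1))
  echo "{\"count\": $count}"
done
```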

The YAML files for these microservices are contained in the 04-yaml-connect-envoy directory.

Use the standard kubectl command to apply them to the cluster.

$ kubectl apply -f 04-yaml-connect-envoy

pod/counting created
pod/dashboard created
service/dashboard-service-load-balancer created

You should see output showing that a counting pod and a dashboard pod have been created, along with a load balancer for the dashboard service.

Use the kubectl command again to find the IP address of the dashboard load balancer.

$ kubectl get service dashboard-service-load-balancer --watch

NAME                              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
dashboard-service-load-balancer   LoadBalancer   10.0.x.x     52.x.x.x      80:31622/TCP   54s

Open EXTERNAL-IP in your web browser.

Dashboard service

You'll see a number that was fetched from the backend counting API. It will increment every few seconds.

»Configure Intentions

Consul can be configured to allow access between services or block access.

Go to the Consul UI IP address as mentioned previously. Find the Intentions tab. Click the Create button.

Create an intention from * to * as a Deny intention. Click Save.

Deny intention

Back in the web browser, find the microservice dashboard as mentioned previously. You'll see that the Counting Service is Unreachable.


Back at the Consul UI, create another intention to allow communication. Click Create. Select a Source Service of dashboard and a Destination Service of counting. Choose the allow radio button. Finally, click the Create button.

Allow intention

Back at the microservice dashboard, you will see that it is again Connected and shows a new number every few seconds.


»Destroy the Demo

Now that you have created an AKS cluster, deployed Consul with Helm, and deployed applications, you can destroy the cluster in a single step.

Move back into the terraform/azure/01-create-aks-cluster directory.

$ cd terraform/azure/01-create-aks-cluster/

Run terraform destroy.

$ terraform destroy

Plan: 0 to add, 0 to change, 2 to destroy.

Destroy complete! Resources: 2 destroyed.


This guide covers the steps needed to deploy and configure a cluster as an operator. It does not cover development tasks such as creating a Golang web application, building Docker containers for each part of the application, configuring Consul and Kubernetes from init containers, and writing YAML to deploy the containers with their associated environment variables.

This guide will not go into detail about all the steps required, but the code is available for you to view. In particular, look for:

  • Entire application and all configuration in the k8s directory.
  • YAML for Kubernetes in the 04-yaml-connect-envoy directory. This includes configuration for the counting and dashboard services, including annotations to enable Connect sidecar proxies and send environment variables to the relevant Docker containers.
  • Init containers in the counting-init and dashboard-init directories. These contain shell scripts that register services with Consul. A Kubernetes init container runs before the related application container and has access to port numbers so the service can be configured.
  • Application containers in the counting-service and dashboard-service directories. These run several microservices and accept configuration via environment variables.
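The init-container registration described above typically boils down to writing a Consul service definition that includes a Connect sidecar stanza. A sketch of what such a payload might look like; the file name and port are illustrative assumptions, and the real scripts live in the counting-init and dashboard-init directories.

```shell
# Sketch of the kind of registration an init container might perform: write a
# Consul service definition with a Connect sidecar stanza. The file name and
# port number are illustrative assumptions, not the demo's actual values.
cat > counting-service.json <<'EOF'
{
  "service": {
    "name": "counting",
    "port": 9001,
    "connect": { "sidecar_service": {} }
  }
}
EOF

# In a real init container this definition would be registered with the local
# Consul client (for example via its HTTP API); here we just show the payload.
cat counting-service.json
```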


In this guide, you learned to deploy a Consul datacenter on Azure Kubernetes Service with the official Helm chart. Terraform configuration for AKS and Helm makes the process consistent and repeatable, and the demo microservices run in Docker containers that communicate securely through Consul Connect.

Further steps can be taken to secure the entire cluster, connect to other clusters in other datacenters, or deploy additional microservices that can find each other with Consul service discovery and connect securely with Consul Connect.

For additional reference, consult the Azure Kubernetes Service documentation and the HashiCorp Consul documentation.