In this tutorial you will deploy a Consul datacenter on Azure Kubernetes Service (AKS) with the official Helm chart. You do not need to update any values in the Helm chart for a basic installation. However, you can create a values file with parameters to allow access to the Consul UI. At the beginning of this tutorial, you can watch an optional Azure Friday demo.
Security Warning: This tutorial is not for production use. By default, the chart installs an insecure configuration of Consul. Refer to the Kubernetes documentation to determine how you can secure Consul on Kubernetes in production. Additionally, it is highly recommended that you use a properly secured Kubernetes cluster, or that you understand and enable the recommended security features.
»Prerequisites
To complete this tutorial successfully, you should have an Azure account with the ability to create a Kubernetes cluster.
All the tools you need are installed in the Azure Cloud Shell. Visit the Cloud Shell to run this example. We used the Linux bash shell.
The code for this example is in a git repository. Clone this repository within your cloud shell before starting the rest of the tutorial.
$ git clone https://github.com/hashicorp/demo-consul-101.git
NOTE: This example uses Terraform 0.12 and Helm 3, which are installed in the Azure Cloud Shell.
»Watch the Azure Friday demo - optional
This 12-minute video was created by HashiCorp and Azure to demonstrate Consul service mesh capabilities on AKS.
»AKS configuration
You'll create a Kubernetes cluster on Azure Kubernetes Service and run Consul on it together with a few microservices which use Consul to discover each other and communicate securely with Consul Connect (Consul's service mesh feature).
»Create an AKS cluster with Terraform
First, create an Azure Kubernetes Service cluster. You'll use Terraform to create the cluster with the features you need for this demo.
Change into the `k8s/terraform/azure/01-create-aks-cluster` directory.
$ cd demo-consul-101/k8s/terraform/azure/01-create-aks-cluster
Run the `az` command with the following arguments to create an Active Directory service principal account for this demo. If it succeeds, the output will be a JSON snippet that includes your `appId`, `password`, and other values.
$ az ad sp create-for-rbac --skip-assignment
{
"appId": "aaaa-aaaa-aaaa",
"displayName": "azure-cli-2019-04-11-00-46-05",
"name": "http://azure-cli-2019-04-11-00-46-05",
"password": "aaaa-aaaa-aaaa",
"tenant": "aaaa-aaaa-aaaa"
}
Use these values to configure Terraform. Open a new `terraform.tfvars` file in the in-browser text editor from the cloud shell with the `code` command.
$ code terraform.tfvars
Next, copy the JSON output of the `az` command above and paste it into the new `terraform.tfvars` file. Edit the contents to conform to Terraform variable style: remove the curly braces and the quotes around variable names, use the `=` sign for assignment, and remove the trailing commas.
# terraform.tfvars
appId="aaaa-aaaa-aaaa"
password="aaaa-aaaa-aaaa"
NOTE: Only the `appId` and `password` values should be listed.
Now you're ready to initialize the Terraform project.
$ terraform init
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "azurerm" (1.27.0)...
Terraform has been successfully initialized!
The final step in this section is to run `terraform apply` to create the cluster. Respond with `yes` when prompted.
$ terraform apply
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
kubernetes_cluster_name = demo-aks
resource_group_name = demo-rg
NOTE: It may take as many as 10 minutes to provision the AKS cluster.
Optionally, you may review the Terraform files to check the configuration code needed to create the cluster on AKS. Note the lines which specify that `role_based_access_control` should be enabled.
resource "azurerm_kubernetes_cluster" "default" {
# ...
role_based_access_control {
enabled = true
}
# ...
}
»Enable the Kubernetes dashboard
In order to use the Kubernetes dashboard, you need to create a `ClusterRoleBinding`:
$ kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
At this point, all the necessary prerequisites should be installed and running.
While still in the cloud shell, you can use the `az aks browse` command to open a new web browser tab with the Kubernetes dashboard.
# View k8s dashboard
$ az aks browse --resource-group demo-rg --name demo-aks
The Kubernetes dashboard will open in your web browser.
»Consul configuration
Now that your AKS cluster is running, you're ready to install Consul to the cluster. Consul can run inside or outside of a Kubernetes cluster but for this demo you will use containers to run Consul itself inside of Kubernetes pods.
»Install Consul with Helm
Move out to the `k8s` directory in the project.
$ cd ~/demo-consul-101/k8s
Add the HashiCorp Helm Chart repository:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
Optionally, open the `helm-consul-values.yaml` file with the `code` command to review the configuration that the Helm chart will use. You'll see that a datacenter name is specified, a load balancer is configured, and the Consul UI will be exposed through the load balancer.
$ code helm-consul-values.yaml
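As a rough sketch, a values file for this kind of installation looks like the following. The datacenter name and key layout are assumptions based on the `hashicorp/consul` chart of this era, not a verbatim copy of the repository's file; verify the key names against your chart version.

```yaml
# Sketch of a helm-consul-values.yaml -- key names follow the
# hashicorp/consul Helm chart; verify against your chart version.
global:
  # Name for the Consul datacenter (assumed value).
  datacenter: hashidc1

ui:
  service:
    # Expose the Consul UI through an Azure load balancer.
    type: LoadBalancer

connectInject:
  # Enable the Consul Connect sidecar injector for the service mesh.
  enabled: true
```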
You can now use `helm` to install Consul with the `hashicorp/consul` chart.
$ helm install azure hashicorp/consul -f helm-consul-values.yaml
NAME: azure
LAST DEPLOYED: Thu Apr 11 01:09:01 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
It may take a few minutes for the pods to spin up. When they are ready, you can view the Consul UI in your web browser.
TIP: Use the `--watch` flag to wait for the load balancer to spin up.
# View Consul UI
$ kubectl get service azure-consul-ui --watch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure-consul-ui LoadBalancer 10.0.134.144 <pending> 80:31768/TCP 35s
azure-consul-ui LoadBalancer 10.0.134.144 52.151.17.26 80:31768/TCP 104s
It may take a few minutes, but eventually you'll get an entry under EXTERNAL-IP. Open that IP address in your web browser and you'll reach the Consul UI.
NOTE: It may take some time for Consul to be ready to serve the UI. Occasionally in our testing, CoreDNS lookups failed for five minutes in the new cluster, which prevented Consul from starting properly.
Click through to the Nodes screen and you'll see several Consul servers and agents running.
»Deploy microservices
As the final deployment step, you will deploy a few containers which contain microservices. A back-end `counting` service returns a JSON snippet with an incrementing number. A `dashboard` service displays the number that it finds from the `counting` service, and also displays debugging information indicating whether the back-end service can be found or is unreachable.
The YAML files for these microservices are contained in the `04-yaml-connect-envoy` directory.
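These manifests rely on Consul Connect's sidecar-injection annotations. The pod sketch below is illustrative rather than copied from the repository: the two `consul.hashicorp.com` annotations are the chart's standard mechanism, while the image tag, port, and environment variable name are assumptions.

```yaml
# Illustrative pod spec -- not a verbatim copy of the repository's YAML.
apiVersion: v1
kind: Pod
metadata:
  name: dashboard
  annotations:
    # Ask the Consul injector to add an Envoy sidecar proxy to this pod.
    "consul.hashicorp.com/connect-inject": "true"
    # Make the counting service reachable on localhost:9001 through
    # the mesh ("service:port" format).
    "consul.hashicorp.com/connect-service-upstreams": "counting:9001"
spec:
  containers:
    - name: dashboard
      image: hashicorp/dashboard-service:0.0.4 # assumed tag
      env:
        # The app reads its upstream URL from an environment variable.
        - name: COUNTING_SERVICE_URL
          value: "http://localhost:9001"
```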
Use the standard `kubectl` command to apply them to the cluster.
$ kubectl apply -f 04-yaml-connect-envoy
pod/counting created
pod/dashboard created
service/dashboard-load-balancer created
You should see output showing that a `counting` pod and a `dashboard` pod have been created, along with a load balancer for the `dashboard` service. Use the `kubectl` command again to find the IP address of the `dashboard` load balancer.
$ kubectl get service dashboard-load-balancer --watch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-load-balancer LoadBalancer 10.0.187.224 52.247.202.123 80:31622/TCP 54s
Open the EXTERNAL-IP in your web browser. The page will show a number that was fetched from the back-end `counting` API. It will increment every few seconds.
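Behind the scenes, each fetch is a plain HTTP GET against the counting service, which answers with a small JSON document. The field names below are assumptions about the demo's payload, not a captured response:

```json
{
  "count": 8,
  "hostname": "counting-pod"
}
```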
»Configure intentions
Consul can be configured to allow access between services or block access.
Go to the Consul UI IP address as mentioned previously. Find the Intentions tab. Click the Create button.
Create an intention from `*` to `*` as a Deny intention. Click Save.
Back in the web browser, find the microservice dashboard as mentioned previously. It should now state that the Counting Service is Unreachable.
Back at the Consul UI, create another intention to allow communication. Click Create. Select a Source Service of dashboard and a Destination Service of counting. Choose the allow radio button. Finally, click the Create button.
Back at the microservice dashboard, the page should show that it is again Connected and shows a new number every few seconds.
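If you prefer the CLI to the UI, the same intentions can be created with `consul intention create`, run from inside a Consul server pod. This is a sketch: the pod name below assumes a Helm release named `azure`; adjust it to match the output of `kubectl get pods`.

```shell
# Deny all service-to-service traffic by default.
$ kubectl exec azure-consul-server-0 -- consul intention create -deny '*' '*'

# Allow only the dashboard service to reach the counting service.
$ kubectl exec azure-consul-server-0 -- consul intention create -allow dashboard counting
```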
»Destroy the demo
Now that you have created an AKS cluster, deployed Consul with `helm`, and deployed applications, you can destroy the cluster. This requires only one step.
Move back into the `terraform/azure/01-create-aks-cluster` directory.
$ cd terraform/azure/01-create-aks-cluster/
Run `terraform destroy`.
$ terraform destroy
Plan: 0 to add, 0 to change, 2 to destroy.
Destroy complete! Resources: 2 destroyed.
NOTE: This operation could take up to 10 minutes.
»Explanation
This tutorial covers the steps needed to deploy and configure a datacenter as an operator. Additional steps not covered here include development tasks such as creating a Golang web application, building Docker containers for each part of the application, configuring Consul and Kubernetes from init containers, and writing YAML to deploy the containers and their associated environment variables.
This tutorial will not go into detail about all the steps required, but the code is available for you to view. In particular, look for:
- The entire application and all configuration, in the `k8s` directory.
- YAML for Kubernetes, in the `04-yaml-connect-envoy` directory. This includes configuration for the `counting` and `dashboard` services, including annotations to enable Consul Connect service mesh sidecar proxies and to send environment variables to the relevant Docker containers.
- Application containers, in the `counting-service` and `dashboard-service` directories. These run several microservices and accept configuration via environment variables.
»Next steps
In this tutorial you learned to deploy a Consul datacenter on Azure Kubernetes Service with the official Helm chart. Terraform configurations for AKS and Helm can make the process more consistent and automated. Helm charts and Docker containers run microservices and connect to each other securely with Consul Connect service mesh.
Further steps can be taken to secure the entire datacenter, connect to other Consul datacenters, or deploy additional microservices that can find each other with Consul service discovery and connect securely with Consul Connect service mesh.
For additional reference documentation on Azure Kubernetes Service or HashiCorp Consul, refer to these websites: