HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service (SMaaS) version of Consul. After you deploy an HCP Consul server cluster, you must deploy Consul clients into your network so you can leverage Consul's full feature set including service mesh and service discovery. For Azure, HCP Consul supports Consul clients running on Azure Kubernetes Service (AKS) and virtual machine (VM) workloads.
In this tutorial, you will configure an AKS cluster to connect to your HCP Consul cluster. Then, you will deploy HashiCups, a demo application that lets you view and order customized HashiCorp branded coffee, to your AKS cluster to leverage HCP Consul's service mesh and ingress gateway capabilities.
»Prerequisites
For this tutorial, you will need:
- Terraform 0.14+ CLI installed locally
- An HCP account configured for use with Terraform
- An Azure account
- The Azure CLI
- kubectl
- helm
- jq
In order for Terraform to run operations on your behalf, log in to Azure.
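For example, with the Azure CLI:

```shell
az login
```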
»Clone example repository
In your terminal, clone the example repository. This repository contains Terraform configuration to deploy different types of Consul clusters, including the one you will need in this tutorial.
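A sketch of the clone step; the repository URL below is a placeholder for the example repository this tutorial references:

```shell
git clone <example-repository-url>
```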
Navigate to the project directory in the cloned repository.
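For example, assuming the project directory name that appears later in this tutorial:

```shell
cd datacenter-deploy-aks-hcp
```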
»Review configuration
The project directory contains three subdirectories:

- The `azure` subdirectory contains Terraform configuration to deploy an Azure resource group, virtual network (VNet) and underlying networking resources, and an AKS cluster. Notice this AKS-specific configuration:

  - The `vnet_subnet_id` is set to the VNet's subnet. By default, AKS creates a virtual network and subnet for you. By specifying the VNet subnet, you ensure the AKS node pool resides in the VNet peered with HCP Consul.
  - The `network_plugin` is set to `azure`. By default, AKS clusters use `kubenet`. By specifying the `network_plugin`, you ensure that the AKS cluster uses Azure networking.
  - The `service_principal` stanza lets you access your AKS cluster without using a managed identity.

  Note: The `hashicorp/hcp-consul/azurerm` Terraform module creates a security group that allows TCP/UDP ingress traffic on port `8301` and allows all egress. The egress security rule lets the VM instance download dependencies required for the Consul client, including the Consul binary and Docker.

- The `hcp` subdirectory contains Terraform configuration that creates an HCP HashiCorp Virtual Network (HVN) and an HCP Consul cluster. In addition, it uses the `hashicorp/hcp-consul/azurerm` Terraform module to set up all networking rules that allow a Consul client to communicate with the HCP Consul servers. This includes setting up the peering connection between the HVN and your VNet, setting up the HCP routes, and creating VNet ingress rules. This configuration also defines an ingress rule that allows traffic on port `8080`, which ensures that users can visit the HashiCups demo application.

- The `hashicups` subdirectory contains the HashiCups demo application `yaml` files for Kubernetes. These files define the `Deployments` and `Services` for each HashiCups microservice, in addition to setting up Consul service intentions and the ingress gateway.
This tutorial intentionally separates the Terraform configuration into two discrete steps. This process reflects Terraform best practices. By dividing the HCP Consul cluster management from the Consul client management (AKS), you can separate the duties and reduce the blast radius.
»Deploy Azure resources
Now that you have reviewed the Terraform configuration, deploy an Azure resource group, virtual network (VNet) and underlying networking resources, and an AKS cluster.
Navigate to the `azure` directory.
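```shell
cd azure
```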
Next, create an Active Directory service principal account. You will use this AD service principal account to authenticate to the AKS cluster.
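A sketch using the Azure CLI; the display name is a placeholder:

```shell
az ad sp create-for-rbac --name <service-principal-name>
```

The command output includes the `appId` and `password` values you will need shortly.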
Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
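```shell
mv terraform.tfvars.example terraform.tfvars
```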
Note: The `.gitignore` file in this repository includes any `.tfvars` files to prevent you from accidentally committing your credentials to version control.
Open the `terraform.tfvars` file and replace the `appId` and `password` values with those displayed in the output from the previous command.
Initialize the Terraform configuration.
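```shell
terraform init
```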
Next, apply the configuration. Respond `yes` to the prompt to confirm.
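```shell
terraform apply
```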
Notice that Terraform displays the outputs created from the apply.
»Create terraform.tfvars file for the hcp directory
Since you created the underlying infrastructure with Terraform, you can use the outputs to help you deploy the HCP resources in the next section.
Create a `terraform.tfvars` file in the `hcp` directory with the Terraform outputs from this project.
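One approach, assuming each variable the `hcp` configuration expects corresponds to an output of this workspace (check `variables.tf` in the `hcp` directory for the exact names; both names below are placeholders):

```shell
cat > ../hcp/terraform.tfvars << EOF
<variable-name> = "$(terraform output -raw <output-name>)"
EOF
```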
»Configure kubectl
Run the following command to retrieve the access credentials for your cluster and automatically configure `kubectl`.
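For example, substituting the resource group and cluster name from your Terraform outputs:

```shell
az aks get-credentials --resource-group <resource-group-name> --name <aks-cluster-name>
```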
»Deploy HCP Consul and resources
Navigate to the `hcp` directory.
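For example, from the `azure` directory:

```shell
cd ../hcp
```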
Initialize the Terraform configuration.
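```shell
terraform init
```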
Next, apply the configuration. Respond `yes` to the prompt to confirm.
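```shell
terraform apply
```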
Notice that Terraform displays the outputs created from the apply.
»Configure development host
Now that you have deployed HCP Consul, you need to retrieve the Consul client configuration information. You will use this information to configure the Consul client so it can connect to the HCP Consul cluster. In addition, you will retrieve the ACL token and HCP Consul public URL to authenticate your Consul CLI.
Tip: If you started from the HCP Portal, select Start from HCP Portal for instructions pertinent to your deployment method.
You can use the HCP Portal to retrieve the client configuration information that you need to connect your AKS cluster client agents to your HCP Consul cluster. Navigate to the Consul resource page in the HCP portal, and then select the Consul cluster you want to join your AKS cluster client agents to. Click the Access Consul dropdown and then click Download to install Client Agents to download a zip archive that contains the necessary files to join your client agents to the cluster.
The archive includes a default client configuration and certificate. Both should be considered secrets, and should be kept in a secure location.
Unzip the client config package into the current working directory, and then use `ls` to confirm that both the `client_config.json` and `ca.pem` files are available.
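A sketch, with a placeholder for the archive name you downloaded:

```shell
unzip <client-config-archive>.zip
ls
```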
Now you will set your `CONSUL_HTTP_ADDR` and `CONSUL_HTTP_TOKEN` environment variables so you can verify the Consul installation later.
First, set your `CONSUL_HTTP_ADDR` environment variable.
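A sketch, with a placeholder for the public URL you retrieved from the HCP portal:

```shell
export CONSUL_HTTP_ADDR=<your-hcp-consul-public-url>
```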
Then, set your `CONSUL_HTTP_TOKEN` environment variable.
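Again with a placeholder, using the ACL token generated from the HCP portal:

```shell
export CONSUL_HTTP_TOKEN=<your-acl-token>
```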
Create a `consul` namespace in your Kubernetes cluster. Your Consul secrets and resources will be created in this namespace.
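```shell
kubectl create namespace consul
```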
»Configure Consul secrets
In this tutorial, you will apply HCP Consul's secure-by-default design by configuring the Consul clients with the gossip encryption key, the Consul CA cert, and a permissive ACL token. You need to store all three of these secrets in Kubernetes so that you can reference them during the Helm chart installation.
Use the `ca.pem` file in the current directory to create a Kubernetes secret to store the Consul CA certificate.
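A sketch; the secret name and key below are assumptions and must match whatever your Helm values file references:

```shell
kubectl create secret generic consul-ca-cert --namespace consul \
  --from-file='tls.crt=./ca.pem'
```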
The Consul gossip encryption key is embedded in the `client_config.json` file that you downloaded and extracted into your current directory. Issue the following command to create a Kubernetes secret that stores the Consul gossip encryption key. The following command uses `jq` to extract the value from the `client_config.json` file.
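A sketch, assuming the key lives in the standard `encrypt` field of the client configuration and that the secret name matches your values file:

```shell
kubectl create secret generic consul-gossip-key --namespace consul \
  --from-literal=key=$(jq -r .encrypt client_config.json)
```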
The last secret you need to add is an ACL bootstrap token. You can use the one you set to your `CONSUL_HTTP_TOKEN` environment variable earlier. Issue the following command to create a Kubernetes secret to store the bootstrap ACL token.
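A sketch; the secret name is again an assumption:

```shell
kubectl create secret generic consul-bootstrap-token --namespace consul \
  --from-literal=token=${CONSUL_HTTP_TOKEN}
```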
Note: If you are configuring a production environment, you should create a client token with a minimum set of privileges. For an in-depth review of how to configure ACLs for Consul, refer to the Secure Consul with Access Control Lists tutorial or the official documentation.
»Create Consul configuration file
Extract some more configuration values from the `client_config.json` file and set them to environment variables that can be used to generate your Helm values file. Issue the following command to set the `DATACENTER` environment variable.
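A sketch, assuming the datacenter name lives in the standard `datacenter` field:

```shell
export DATACENTER=$(jq -r .datacenter client_config.json)
```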
Extract the private server URL from the client config so that it can be set in the Helm values file as the `externalServers:hosts` entry. This value will be passed as the `retry-join` option to the Consul clients.
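A sketch, assuming the private server addresses live in the standard `retry_join` field; `--compact-output` keeps the JSON list on one line so it can be interpolated into the values file:

```shell
export RETRY_JOIN=$(jq -r --compact-output .retry_join client_config.json)
```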
Retrieve your AKS cluster's Kubernetes API server URL so that it can be set in the Helm values file as the `k8sAuthMethodHost` entry.
Note: The following script relies on your cluster name matching your current-context name. If you have created an alias for your context, or the current-context name does not match the cluster name for any other reason, you must manually set `KUBE_API_URL` to your AKS cluster's API server URL. You can use `kubectl config view` to view your cluster and retrieve the API server URL.
Validate that you have set the environment variables correctly. Each variable should return a non-empty value.
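For example:

```shell
echo "DATACENTER:   ${DATACENTER}"
echo "RETRY_JOIN:   ${RETRY_JOIN}"
echo "KUBE_API_URL: ${KUBE_API_URL}"
```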
Note: If any of these environment variables are not correctly set, the following script will generate an incomplete Helm values file, and the Consul Helm installation will not succeed.
Generate the Helm values file, as sketched below. Note that this configuration sets up an ingress gateway. This will be covered in more depth later.
Note: The value for the `global.name` configuration must be unique for each Kubernetes cluster where Consul clients are installed and configured to join Consul as a shared service, such as HCP Consul. You can change the global name through the `global.name` value in the Helm chart.
How you install Consul clients depends on your HCP Consul cluster:

- To install clients on a single Consul cluster, choose the Single Consul Cluster tab.
- To install clients on a primary or secondary Consul cluster that is part of a federated environment, select the Federated Consul Cluster tab.
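The following is a minimal sketch of the single-cluster path. It assumes the Kubernetes secret names created earlier and the default ingress gateway settings described later in this tutorial; your generated file may differ, so treat this as an illustration rather than the exact script:

```shell
# Generate config.yaml from the environment variables set above.
cat > config.yaml << EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  gossipEncryption:
    secretName: consul-gossip-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
  hosts: ${RETRY_JOIN}
  httpsPort: 443
  useSystemRoots: true
  k8sAuthMethodHost: ${KUBE_API_URL}
client:
  enabled: true
  join: ${RETRY_JOIN}
connectInject:
  enabled: true
controller:
  enabled: true
ingressGateways:
  enabled: true
  defaults:
    replicas: 1
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
        ports:
          - port: 8080
EOF
```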
Open your `config.yaml` file to validate that it is populated correctly.
»Deploy Consul on AKS
Now that you have customized your `config.yaml` file, you can deploy Consul with Helm or the Consul K8S CLI. We recommend you deploy Consul into its own dedicated namespace as shown below. This should only take a few minutes. The Consul pods should appear in the pod list immediately.
Note: HCP offers Enterprise features. To interact with these enterprise features, you need to install the Enterprise Consul binary for your client agents. You can find more information about Consul Enterprise and the Helm Chart in the Consul Enterprise Helm Chart documentation.
First, install the HashiCorp tap, a repository of all HashiCorp's Homebrew packages.
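For example, on macOS with Homebrew:

```shell
brew tap hashicorp/tap
```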
Now, install `consul-k8s`.
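```shell
brew install hashicorp/tap/consul-k8s
```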
Finally, deploy Consul on your AKS cluster. When prompted, confirm the installation with a `y`.
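A sketch using the Consul K8S CLI, assuming the values file generated above:

```shell
consul-k8s install -config-file=config.yaml
```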
Visit the official Consul K8S CLI documentation to learn more about additional settings.
Confirm you deployed Consul successfully onto your AKS cluster.
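For example:

```shell
kubectl get pods --namespace consul
```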
»Verify Consul on AKS
Now that you have deployed Consul on AKS, verify that the Consul datacenter contains both your HCP Consul server nodes and your client nodes running on AKS. Notice there are three client nodes, one for each node in your AKS node pool.
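Since your `CONSUL_HTTP_ADDR` and `CONSUL_HTTP_TOKEN` environment variables are already set, one way to check is with the Consul CLI:

```shell
consul members
```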
»Deploy an example workload
Now that you have set up Consul clients on your AKS cluster, it is time to deploy an application workload. This tutorial uses the HashiCups demo application.
Navigate to the `hashicups` directory.
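For example, from the `hcp` directory:

```shell
cd ../hashicups
```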
This directory contains the necessary `yaml` files to deploy the HashiCups application. Issue the following command to deploy the application to your AKS cluster.
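A sketch, assuming the application manifests sit at the top level of this directory (`kubectl apply -f` does not recurse into the `ingress-gateway/` and `intentions/` subdirectories by default):

```shell
kubectl apply -f .
```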
»Register a Consul ingress gateway
In Consul, an ingress gateway is a type of resource called a config entry. An ingress gateway is an endpoint that allows external traffic to reach specific resources hosted within the datacenter, such as the application user interface.
Open `ingress-gateway/ingress-gateway.yaml`. You will use this file to register an ingress gateway with Consul. Notice the `Name` value matches the `ingressGateways:gateways:name` entry in `config.yaml`. These fields must match.
Notice that the `ingress-gateway` listener is set to port `8080`. This is the default ingress gateway port when you set up an ingress gateway through Helm or `consul-k8s`.
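In its custom resource form, the file looks roughly like the following sketch, where `metadata.name` carries the `Name` value referenced above (the exact listener services may differ):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  listeners:
    - port: 8080
      protocol: http
      services:
        - name: nginx
```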
Now, register the config entry with Consul.
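One way to register it, assuming the custom resource form shown above and that you are still in the `hashicups` directory:

```shell
kubectl apply -f ingress-gateway/ingress-gateway.yaml
```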
»Create the necessary intentions
Since HCP Consul on Azure is secure by default, the datacenter is created with a "default deny" intention in place. This means that, by default, no services can interact with each other until an operator explicitly allows them to do so by creating intentions for each inter-service operation they wish to allow.
As of Consul 1.9, `service-intentions` config entries can be created using CRDs.
Open `intentions/intentions.yaml`. This file defines the intentions necessary for HashiCups. Notice this also creates an intention from the `ingress-gateway` to the `nginx` service, the HashiCups entrypoint.
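For instance, the `ingress-gateway` to `nginx` intention looks roughly like this `ServiceIntentions` resource (a sketch; the file defines similar entries for the other HashiCups services):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: nginx
spec:
  destination:
    name: nginx
  sources:
    - name: ingress-gateway
      action: allow
```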
Run the following command to create the `service-intentions` config entries the HashiCups application requires to run.
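For example, assuming the path shown above:

```shell
kubectl apply -f intentions/intentions.yaml
```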
»Access the HashiCups UI
With the intentions in place, all that remains is to retrieve the public URL and port of the ingress gateway. Do this by retrieving a list of services.
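For example, assuming Consul components run in the `consul` namespace, where the ingress gateway service is created:

```shell
kubectl get services --namespace consul
```

Look for the ingress gateway's external IP or hostname in the output.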
Use the following command to verify you have successfully deployed the application.
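A sketch, with a placeholder for the external address from the previous step:

```shell
curl http://<ingress-gateway-external-address>:8080
```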
Open the URL in your browser to find the HashiCups UI.
Note: If you are unable to load the HashiCups UI, verify you have an Azure ingress rule that allows port `8080`.
This validates that Consul service discovery is working, because the services are able to resolve their upstreams. This also validates that Consul service mesh is working, because the intentions you created allow the services to interact with one another.
»Next steps
In this tutorial, you connected Consul clients on AKS to HCP Consul and deployed a demo application. To keep learning about Consul's features, and for step-by-step examples of how to perform common Consul tasks, complete one of the following tutorials.
- Explore the Consul UI
- Review recommended practices for Consul on Kubernetes
- Deploy a metrics pipeline with Prometheus and Grafana
If you encounter any issues, please contact the HCP team at support.hashicorp.com.