In this tutorial you will deploy a Consul datacenter to Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS) with HashiCorp's official Helm chart. You do not need to override any values in the Helm chart for a basic installation. However, you will create a config file with custom values to allow access to the Consul UI.
Security Warning: This tutorial is not for production use. By default, the chart will install an insecure configuration of Consul. Please refer to the Kubernetes documentation to determine how you can secure Consul on Kubernetes in production. Additionally, it is highly recommended to use a properly secured Kubernetes cluster, or to make sure that you understand and enable the recommended security features.
»Prerequisites
»Installing aws-cli, kubectl, and helm CLI tools
To follow this tutorial, you will need the aws-cli binary installed, as well as kubectl and helm.
Reference the following instructions for setting up aws-cli, as well as general documentation:
Reference the following instructions to download kubectl and helm:
»Installing helm and kubectl with Homebrew on macOS
Homebrew allows you to quickly install both Helm and kubectl on macOS.
$ brew install kubernetes-cli
Install helm on macOS with Homebrew.
$ brew install kubernetes-helm
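Before moving on, you can confirm that all three tools are on your PATH. This is a small convenience sketch, not part of the original instructions:

```shell
# Check that each required CLI tool is installed and on the PATH.
for tool in aws kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done
```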
»VPC and security group creation
The AWS documentation for creating an EKS cluster assumes that you have already created a VPC and a dedicated security group. Reference the following instructions to create them:
You will need the SecurityGroups, VpcId, and SubnetId values for the EKS cluster creation step.
»Create an EKS cluster
Review the AWS documentation for creating and administering an EKS cluster within AWS.
»Configure kubectl to talk to your cluster
Setting up kubectl to talk to your EKS cluster should be as simple as running the following:
$ aws eks --region <region where you deployed your cluster> update-kubeconfig --name <your cluster name>
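If you run this often, it can help to parameterize the command. The region and cluster name below are placeholders for illustration; substitute your own values:

```shell
# Placeholders - replace with your own region and cluster name.
REGION="us-east-1"
CLUSTER_NAME="my-consul-cluster"

# Print the exact command; remove the leading "echo" to run it for real.
echo aws eks --region "$REGION" update-kubeconfig --name "$CLUSTER_NAME"
```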
You can then run kubectl cluster-info to verify you are connected to your Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://.eks.amazonaws.com
CoreDNS is running at https://.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
You can also review the documentation for configuring kubectl and EKS here:
»Deploy Consul
You can deploy a complete Consul datacenter using the official Helm chart. By default, the chart will install three Consul servers and one client per Kubernetes node in your EKS cluster. You can review the Helm chart values to learn more about the default settings.
»Add the HashiCorp Helm chart repository
First, you will need to add the HashiCorp Helm Chart repository:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
For testing, it is not necessary to customize the default Helm chart config before deploying Consul because it comes with reasonable defaults. However, for this tutorial, you will update several values to customize the installation and enable access to the UI. Review the Helm chart documentation to learn more about the chart.
»Creating a values file
To customize your deployment, you can pass a YAML file to Helm during the deployment; its values will override the Helm chart's defaults. The following values change your datacenter name and expose the Consul UI via a LoadBalancer service.
# helm-consul-values.yaml
global:
  datacenter: hashidc1

ui:
  service:
    type: 'LoadBalancer'
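If you prefer to create the file from the command line, a heredoc produces the same content (using the file name referenced later in this tutorial):

```shell
# Write the values file shown above to helm-consul-values.yaml.
cat > helm-consul-values.yaml <<'EOF'
global:
  datacenter: hashidc1

ui:
  service:
    type: 'LoadBalancer'
EOF

# Confirm the contents.
cat helm-consul-values.yaml
```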
»Install Consul with Helm
Once the configuration is complete, you can use Helm to install the chart. With Helm 3.x there is nothing extra to configure, and you can install the chart directly.
»Helm install
Now, you can deploy Consul using helm install. You will pass in the values file you created above and the location of the Helm chart.
We recommend verifying your install or upgrade with --dry-run prior to your actual run.
$ helm install -f helm-consul-values.yaml hashicorp hashicorp/consul
The output of helm install will show you the details of your Consul deployment. You can also verify your cluster with kubectl get pods, or by running consul members from one of the server pods.
You should expect three servers and three clients:
$ consul members
Node Address Status Type Build Protocol DC Segment
hashicorp-consul-server-0 10.8.1.9:8301 alive server 1.6.1 2 hashidc1
hashicorp-consul-server-1 10.8.2.4:8301 alive server 1.6.1 2 hashidc1
hashicorp-consul-server-2 10.8.0.8:8301 alive server 1.6.1 2 hashidc1
ip-172-31-95-163.ec2.internal 10.8.0.7:8301 alive client 1.6.1 2 hashidc1
ip-172-31-81-128.ec2.internal 10.8.1.8:8301 alive client 1.6.1 2 hashidc1
ip-172-31-87-198.ec2.internal 10.8.2.3:8301 alive client 1.6.1 2 hashidc1
»Accessing the Consul UI
Since you enabled the Consul UI in your values file, you can run kubectl get services to find the external IP of your UI service.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName consul.service.consul 4s
hashicorp-consul-dns ClusterIP 10.12.8.3 53/TCP,53/UDP 63s
hashicorp-consul-server ClusterIP None 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 63s
hashicorp-consul-ui LoadBalancer 10.12.6.197 104.198.132.100 80:30037/TCP 63s
kubernetes ClusterIP 10.12.0.1 443/TCP 77m
You can verify that, in this case, the UI is exposed at 104.198.132.100 over port 80. Navigate to the external IP in your browser to interact with the Consul UI.
»Accessing Consul with the CLI and API
In addition to accessing Consul with the UI, you can manage Consul with the HTTP API or by directly connecting to the pod with kubectl.
You can use the Consul HTTP API by communicating to the local agent running on the Kubernetes node. You can read the documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
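As a sketch, you could query the catalog through the exposed UI service with curl. The address below is the example external IP from this tutorial; substitute your own:

```shell
# Example external IP from the kubectl get services output in this tutorial.
CONSUL_ADDR="http://104.198.132.100"

# /v1/catalog/nodes lists every node registered in the datacenter.
# Remove the leading "echo" to actually send the request.
echo curl "$CONSUL_ADDR/v1/catalog/nodes"
```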
»Kubectl
To access the pod and data directory, you can exec into the pod with kubectl to start a shell session.
$ kubectl exec -it hashicorp-consul-server-0 -- /bin/sh
This allows you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members:
$ consul members
Node Address Status Type Build Protocol DC Segment
hashicorp-consul-server-0 10.8.1.9:8301 alive server 1.6.1 2 hashidc1
hashicorp-consul-server-1 10.8.2.4:8301 alive server 1.6.1 2 hashidc1
hashicorp-consul-server-2 10.8.0.8:8301 alive server 1.6.1 2 hashidc1
ip-172-31-95-163.ec2.internal 10.8.0.7:8301 alive client 1.6.1 2 hashidc1
ip-172-31-81-128.ec2.internal 10.8.1.8:8301 alive client 1.6.1 2 hashidc1
ip-172-31-87-198.ec2.internal 10.8.2.3:8301 alive client 1.6.1 2 hashidc1
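You can also run a single Consul CLI command without opening an interactive shell by passing the command to kubectl exec:

```shell
# Print the one-off command; drop the leading "echo" to run it against your cluster.
echo kubectl exec hashicorp-consul-server-0 -- consul members
```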
»Using Consul environment variables
You can also access the Consul datacenter with your local Consul binary by setting the environment variables documented here.
In this case, since you are exposing HTTP via the load balancer/UI service, you can set CONSUL_HTTP_ADDR to the same external IP you used to access the UI above:
$ export CONSUL_HTTP_ADDR=http://104.198.132.100:80
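If you script this step, you can build the address from the service's external IP. The IP below is the example value from this tutorial; substitute your own:

```shell
# Example EXTERNAL-IP from the kubectl get services output; substitute your own.
UI_EXTERNAL_IP="104.198.132.100"

# The UI service exposes Consul's HTTP interface on port 80.
export CONSUL_HTTP_ADDR="http://${UI_EXTERNAL_IP}:80"
echo "$CONSUL_HTTP_ADDR"
```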
You can now use your local installation of the Consul binary to run Consul commands:
$ consul members
Node Address Status Type Build Protocol DC Segment
hashicorp-consul-server-0 10.8.1.9:8301 alive server 1.6.1 2 hashidc1
hashicorp-consul-server-1 10.8.2.4:8301 alive server 1.6.1 2 hashidc1
hashicorp-consul-server-2 10.8.0.8:8301 alive server 1.6.1 2 hashidc1
ip-172-31-95-163.ec2.internal 10.8.0.7:8301 alive client 1.6.1 2 hashidc1
ip-172-31-81-128.ec2.internal 10.8.1.8:8301 alive client 1.6.1 2 hashidc1
ip-172-31-87-198.ec2.internal 10.8.2.3:8301 alive client 1.6.1 2 hashidc1
»Next steps
In this tutorial, you deployed a Consul datacenter to AWS Elastic Kubernetes Service using the official Helm chart. You also configured access to the Consul UI. To learn more about deployment best practices, review the Kubernetes Reference Architecture tutorial.