In this guide you will deploy a Consul datacenter to Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) with HashiCorp's official Helm chart. You do not need to update any values in the Helm chart for a basic installation. However, you will create a values file with parameters that allow access to the Consul UI.
Security Warning: This guide is not for production use. By default, the chart installs an insecure configuration of Consul. Refer to the Kubernetes documentation to determine how you can secure Consul on Kubernetes in production. Additionally, it is highly recommended that you use a properly secured Kubernetes cluster, or that you understand and enable the recommended security features.
Prerequisites
Installing gcloud, kubectl, and helm CLI Tools
To follow this guide, you will need the Google Cloud SDK (gcloud), as well as kubectl and helm.
Refer to the Google Cloud SDK installation instructions and general gcloud documentation for setup details.
To initialize the gcloud command-line tool to use the Google Cloud SDK, run gcloud init.
$ gcloud init
Refer to the official helm and kubectl installation instructions for download links.
Installing helm and kubectl with Homebrew on macOS
Homebrew allows you to quickly install both helm and kubectl on macOS.
$ brew install kubernetes-cli
$ brew install kubernetes-helm
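You can verify that both tools installed correctly by checking their versions. The exact output will vary depending on the versions Homebrew installed:
$ kubectl version --client
$ helm version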
Service Account Authentication (optional)
Optionally, you can create a GCP IAM service account and authenticate with it on the command line.
- Review the GCP IAM service account documentation for background on service accounts.
- Refer to the gcloud iam reference for commands that interact with service accounts.
Once you have obtained your GCP IAM service account key file, you can authenticate your local gcloud CLI by running the following:
$ gcloud auth activate-service-account --key-file="<path-to/my-consul-service-account.json>"
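To confirm that the service account is now the active credential, you can list your authenticated accounts:
$ gcloud auth list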
Create a Kubernetes Cluster
Review the GCP documentation for creating and administering a Kubernetes cluster within GCP. Note that, for a quick start, you can also create a GKE cluster from the GCP console by clicking "Create Cluster", accepting the defaults, and clicking "Create."
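If you prefer the command line, you can create a comparable cluster with gcloud instead. The cluster name, zone, and node count below are illustrative; substitute values appropriate for your project:
$ gcloud container clusters create my-consul-cluster --zone us-west1-b --num-nodes 3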
Configure kubectl to Talk to Your Cluster
From the GCP console, where you previously created your cluster, click the "Connect" button. Copy the snippet provided and paste it into your terminal.
$ gcloud container clusters get-credentials my-consul-cluster --zone us-west1-b --project my-project
You can then run kubectl cluster-info to verify you are connected to your Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://<your GKE ip(s)>
GLBCDefaultBackend is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use kubectl cluster-info dump.
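You can also verify that all of the cluster's worker nodes registered and are in the Ready state:
$ kubectl get nodes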
Deploy Consul
You can deploy a complete Consul datacenter using the official Helm chart. By default, the chart installs three Consul servers and one Consul client per Kubernetes node in your GKE cluster. You can review the Helm chart values to learn more about the default settings.
Download the Helm Chart
First, you will need to clone the official Helm chart from HashiCorp's GitHub repo.
$ git clone https://github.com/hashicorp/consul-helm.git
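If you want to review the chart's default values before customizing them, you can render them locally. The helm show values command is Helm 3 syntax; the Helm 2 equivalent is helm inspect values:
$ helm show values ./consul-helm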
For testing, it is not necessary to update the Helm chart before deploying Consul; it comes with reasonable defaults. However, for this guide, you will update several values to customize the installation and enable access to the UI. Review the Helm chart documentation to learn more about the chart.
Creating a Values File
To customize your deployment, you can pass a YAML values file during the deployment; it will override the Helm chart's defaults. The following values change your datacenter name, expose the Consul UI via a LoadBalancer service, and enable the catalog sync (syncCatalog) feature.
# helm-consul-values.yaml
global:
  datacenter: hashidc1

ui:
  service:
    type: "LoadBalancer"

syncCatalog:
  enabled: true
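If you prefer not to maintain a values file, the same overrides can be passed inline with --set flags instead. This sketch assumes the Helm 3 install syntax used later in this guide:
$ helm install hashicorp ./consul-helm \
    --set global.datacenter=hashidc1 \
    --set ui.service.type=LoadBalancer \
    --set syncCatalog.enabled=true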
Install Consul with Helm
Helm 2
If using Helm 2, you need to install and configure Tiller. If using Helm 3, skip ahead to the next section.
$ helm init
$HELM_HOME has been configured at ~/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
Create a Kubernetes service account called "tiller":
$ kubectl --namespace kube-system create serviceaccount tiller
serviceaccount/tiller created
Next, create a Kubernetes clusterrolebinding between the cluster-admin role and the tiller service account. You do not need to customize the following command:
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
Then, patch the tiller-deploy deployment in the kube-system namespace to use the tiller service account:
$ kubectl --namespace kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Helm Install
Now, you can deploy Consul using helm install. You will pass in the values file you created above and the location of the Helm chart.
It is recommended that you verify your install or upgrade with the --dry-run flag prior to your actual run.
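For example, the following dry run renders the chart's templates with your values file but does not create any resources. It uses the same release name and paths as the install command below:
$ helm install -f helm-consul-values.yaml hashicorp ./consul-helm --dry-run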
$ helm install -f helm-consul-values.yaml hashicorp ./consul-helm
For Helm 2, run helm install -f helm-consul-values.yaml --name hashicorp ./consul-helm
The output of helm install will show you the details of your Consul deployment, but you can also use kubectl get pods to verify your cluster:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hashicorp-consul-fmd8f 0/1 Running 0 26s
hashicorp-consul-mvkh8 0/1 Running 0 26s
hashicorp-consul-ngkss 0/1 Running 0 26s
hashicorp-consul-server-0 0/1 Running 0 26s
hashicorp-consul-server-1 0/1 Running 0 26s
hashicorp-consul-server-2 0/1 Running 0 26s
hashicorp-consul-sync-catalog-6bc5f86c85-fqjhs 0/1 Running 0 26s
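The pods report 0/1 ready until the servers elect a leader and pass their health checks. You can watch them until every pod reports 1/1:
$ kubectl get pods --watch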
Accessing the Consul UI
Since you enabled the Consul UI in your values file, you can run kubectl get services to find the external IP of your UI service.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 4s
hashicorp-consul-dns ClusterIP 10.12.8.3 <none> 53/TCP,53/UDP 63s
hashicorp-consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 63s
hashicorp-consul-ui LoadBalancer 10.12.6.197 104.198.132.100 80:30037/TCP 63s
kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 77m
You can see above that, in this case, the UI is exposed at 104.198.132.100 over port 80. Navigate to the external IP in your browser to interact with the Consul UI:
Accessing Consul with the CLI and API
In addition to accessing Consul with the UI, you can manage Consul with the HTTP API or by directly connecting to the pod with kubectl.
Consul HTTP API
You can use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. Read the documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
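As a quick check, you can also issue API requests against the external IP of the UI service from the earlier kubectl get services output (your IP will differ). For example, the /v1/status/leader endpoint returns the address of the current Consul leader:
$ curl http://104.198.132.100/v1/status/leader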
Kubectl
To access the pod and data directory, you can exec into the pod with kubectl to start a shell session.
$ kubectl exec -it hashicorp-consul-server-0 -- /bin/sh
This will allow you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members:
$ consul members
Node Address Status Type Build Protocol DC Segment
hashicorp-consul-server-0 10.8.1.9:8301 alive server 1.6.1 2 hashidc1 <all>
hashicorp-consul-server-1 10.8.2.4:8301 alive server 1.6.1 2 hashidc1 <all>
hashicorp-consul-server-2 10.8.0.8:8301 alive server 1.6.1 2 hashidc1 <all>
gke-standard-cluster-1-default-pool-60f986c7-19nq 10.8.0.7:8301 alive client 1.6.1 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-q7mn 10.8.1.8:8301 alive client 1.6.1 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-xwz6 10.8.2.3:8301 alive client 1.6.1 2 hashidc1 <default>
Using Consul Environment Variables
You can also access the Consul datacenter with your local Consul binary by setting the environment variables documented in the Consul CLI reference.
In this case, since you are exposing HTTP via the load balancer/UI service, you can set CONSUL_HTTP_ADDR to the same external IP you used to access the UI above:
$ export CONSUL_HTTP_ADDR=http://104.198.132.100:80
You can now use your local installation of the Consul binary to run Consul commands:
$ consul members
Node Address Status Type Build Protocol DC Segment
hashicorp-consul-server-0 10.8.1.9:8301 alive server 1.6.1 2 hashidc1 <all>
hashicorp-consul-server-1 10.8.2.4:8301 alive server 1.6.1 2 hashidc1 <all>
hashicorp-consul-server-2 10.8.0.8:8301 alive server 1.6.1 2 hashidc1 <all>
gke-standard-cluster-1-default-pool-60f986c7-19nq 10.8.0.7:8301 alive client 1.6.1 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-q7mn 10.8.1.8:8301 alive client 1.6.1 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-xwz6 10.8.2.3:8301 alive client 1.6.1 2 hashidc1 <default>
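Since you enabled catalog sync in your values file, you can also confirm that Kubernetes services are being synced into the Consul catalog. This assumes CONSUL_HTTP_ADDR is still exported, and the exact list depends on the services running in your cluster:
$ consul catalog services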
Summary
In this guide, you deployed a Consul datacenter to Google Kubernetes Engine using the official Helm chart. You also configured access to the Consul UI. To learn more about deployment best practices, review the Kubernetes Reference Architecture guide.