In this guide, you'll start a local Kubernetes cluster with Minikube. You will then deploy Consul with the official Helm chart. After deploying Consul, you will learn how to access the agents. You will then deploy two services that use Consul to discover each other and communicate over TLS via Consul Connect.
Security Warning: This guide is not for production use. By default, the chart installs an insecure configuration of Consul. Refer to the Kubernetes documentation to determine how to secure Consul on Kubernetes in production. Additionally, it is highly recommended to use a properly secured Kubernetes cluster, or to make sure that you understand and enable the recommended security features.
First, you'll need to follow the directions for installing Minikube, including VirtualBox or a similar virtualization tool.
You'll also need to install kubectl and helm.

On macOS, install both with Homebrew:

$ brew install kubernetes-cli
$ brew install kubernetes-helm

On Windows, install both with Chocolatey:

$ choco install kubernetes-cli
$ choco install kubernetes-helm
Start Minikube with the optional --memory flag, specifying the equivalent of 4-8GB of memory so your pods will have plenty of resources to use. Starting Minikube may take several minutes; it will download 100-300MB of dependencies and container images.
$ minikube start --memory 4096
Next, use the minikube dashboard command to launch the local Kubernetes dashboard in a browser. Even if the previous step completed successfully, you may have to wait a minute or two for Minikube to be available. If you see an error, try again after a few minutes.
$ minikube dashboard
Once it's available, you'll see the dashboard in your web browser. You can view pods, nodes, and other resources.
»Install Consul with the official Helm chart
Tip: You can deploy a complete Consul datacenter using the official Helm chart. By default, the chart will install a three-server Consul datacenter on Kubernetes, with a Consul client on each Kubernetes node. You can review the official Helm chart values to learn more about the default settings.
»Download the demo code and Helm chart
First, add the HashiCorp Helm Chart repository:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
»Create a custom values file
The chart comes with reasonable defaults; however, you will override a few values to help things go more smoothly with Minikube and to enable useful features.
Create a custom values file called helm-consul-values.yaml with the following contents. Name the Consul datacenter, and then enable the following:
- the Consul UI via a NodePort service
- secure communication between pods with Connect

Finally, you'll configure your deployment to have only one Consul server (suitable for local development).
# Choose an optional name for the datacenter
global:
  datacenter: minidc

# Enable the Consul Web UI via a NodePort
ui:
  service:
    type: 'NodePort'

# Enable Connect for secure communication between nodes
connectInject:
  enabled: true

client:
  enabled: true

# Use only one Consul server for local development
server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
»Deploy Consul on Minikube
Now, run helm install, providing your custom values file, the hashicorp/consul chart, and a name for your Consul installation. It will print a list of all the resources that were created.
$ helm install -f helm-consul-values.yaml hashicorp hashicorp/consul
NAME: hashicorp
LAST DEPLOYED: Wed Nov 13 09:47:39 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/DaemonSet
NAME              AGE
hashicorp-consul  0s

==> v1/StatefulSet
hashicorp-consul-server  0s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
hashicorp-consul-fqzsz                                           0/1    ContainerCreating  0         0s
hashicorp-consul-connect-injector-webhook-deployment-5c7d4pvh5h  0/1    ContainerCreating  0         0s

==> v1/Service
hashicorp-consul-connect-injector-svc  0s
hashicorp-consul-dns                   0s
hashicorp-consul-server                0s
hashicorp-consul-ui                    0s
»Access the Consul UI
Verify Consul was deployed properly by accessing the Consul UI. Run minikube service list to see your services. Find the one with consul-ui in the name.
$ minikube service list
|-------------|----------------------------------------|-----------------------------|
|  NAMESPACE  |                  NAME                  |             URL             |
|-------------|----------------------------------------|-----------------------------|
| default     | hashicorp-consul-server                | No node port                |
| default     | hashicorp-consul-ui                    | http://192.168.99.100:31376 |
| default     | kubernetes                             | No node port                |
| kube-system | kube-dns                               | No node port                |
| kube-system | kubernetes-dashboard                   | No node port                |
| kube-system | tiller-deploy                          | No node port                |
|-------------|----------------------------------------|-----------------------------|
Run minikube service with the consul-ui service name as the argument. It will open the service in your web browser.
$ minikube service hashicorp-consul-ui
You can now view the Consul UI with a list of Consul's services, nodes, and other resources. Currently, you should only see the
consul service listed.
»Access Consul with kubectl and the HTTP API
In addition to accessing Consul with the UI, you can manage Consul with the HTTP API or by directly connecting to the pod with kubectl.
»Use Kubectl to access the server
To access the pod and data directory you can exec into the pod with
kubectl to start a shell session.
First, get a list of all the Kubernetes pods.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE hashicorp-consul-connect-injector-webhook-deployment-5c7d4pvh5h 1/1 Running 0 2m40s hashicorp-consul-fqzsz 1/1 Running 0 2m40s hashicorp-consul-server-0 1/1 Running 0 2m39s
Next, connect to the server using kubectl exec.
$ kubectl exec -it hashicorp-consul-server-0 -- /bin/sh
This will allow you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul version and member list.
$ consul version
Consul v1.6.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
$ consul members
Node                       Address          Status  Type    Build  Protocol  DC      Segment
hashicorp-consul-server-0  172.17.0.8:8301  alive   server  1.6.1  2         minidc  <all>
minikube                   172.17.0.2:8301  alive   client  1.6.1  2         minidc  <default>
»Consul HTTP API
You can use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. Read the documentation to learn more about using the Consul HTTP API with Kubernetes.
»Deploy services with Kubernetes
Because you enabled the Connect injector in your helm-consul-values.yaml file, all the services using Connect will automatically be registered in the Consul catalog.
»Deploy two services
Now you can deploy your services as a two-tier application made of a backend data service that returns a number (the counting service) and a frontend dashboard that pulls from the counting service over HTTP and displays the number.
Create a pod definition and service account for the counting service in a file named counting.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: counting
---
apiVersion: v1
kind: Pod
metadata:
  name: counting
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
spec:
  containers:
    - name: counting
      image: hashicorp/counting-service:0.0.2
      ports:
        - containerPort: 9001
          name: http
  serviceAccountName: counting
Next, create a pod definition and service account for the dashboard service and its load balancer in a file named dashboard.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
---
apiVersion: v1
kind: Pod
metadata:
  name: dashboard
  labels:
    app: 'dashboard'
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
    'consul.hashicorp.com/connect-service-upstreams': 'counting:9001'
spec:
  containers:
    - name: dashboard
      image: hashicorp/dashboard-service:0.0.4
      ports:
        - containerPort: 9002
          name: http
      env:
        - name: COUNTING_SERVICE_URL
          value: 'http://localhost:9001'
  serviceAccountName: dashboard
---
apiVersion: 'v1'
kind: 'Service'
metadata:
  name: 'dashboard-service-load-balancer'
  namespace: 'default'
  labels:
    app: 'dashboard'
spec:
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 9002
  selector:
    app: 'dashboard'
  type: 'LoadBalancer'
  loadBalancerIP: ''
Use kubectl to deploy the counting service.
$ kubectl create -f counting.yaml
serviceaccount/counting created pod/counting created
Use kubectl to deploy the dashboard service.
$ kubectl create -f dashboard.yaml
serviceaccount/dashboard created pod/dashboard created service/dashboard-service-load-balancer created
To verify the services were deployed, refresh the Consul UI until you see that the counting and dashboard services and their proxies are running.
»View the dashboard
To view the dashboard, forward the dashboard pod's port 9002 to your local machine on the same port, providing the pod name (dashboard) that you specified in the pod definition YAML file.
$ kubectl port-forward dashboard 9002:9002
Forwarding from 127.0.0.1:9002 -> 9002 Forwarding from [::1]:9002 -> 9002
Visit http://localhost:9002 in your web browser. It will display the dashboard service running in a Kubernetes pod, with a number retrieved from the
counting service using Consul service discovery, and transmitted securely over the network with mutual TLS via an Envoy proxy.
»Secure service communication with Consul Connect
Consul Connect secures service-to-service communication with authorization and encryption. Applications can use sidecar proxies to automatically establish mutual TLS connections for inbound and outbound network traffic without being aware of Connect at all.
Connect "intentions" provide you the ability to control which services are allowed to communicate. Next, you will use intentions to test the communication between the dashboard and counting services.
»Create an intention that denies communication
Connect to the Consul server using kubectl. Then use the Consul CLI to create an intention that prevents the dashboard service from reaching its upstream counting service, using the consul intention create command.
$ kubectl exec -it hashicorp-consul-server-0 -- /bin/sh
$ consul intention create -deny dashboard counting
Created: dashboard => counting (deny)
Verify the services are no longer allowed to communicate by returning to the dashboard UI. The service will display a message that the "Counting Service is Unreachable" and the count will display as "-1".
You can use
consul intention create to create "deny" and "allow" rules.
»Allow the application dashboard to connect to the counting service
Finally, remove the intention so that the services can communicate again.
$ consul intention delete dashboard counting
This action does not require a restart. It takes effect so quickly that, by the time you visit the application dashboard, you'll see that it's successfully communicating with the backend counting service again.
»Extend your knowledge
»Rolling updates to Consul
While running Consul you may want to make configuration and deployment updates. You can use
helm upgrade to increase the number of agents, enable additional features, or upgrade the Consul version. You can practice using
helm upgrade by updating your custom values file to enable catalog sync.

syncCatalog:
  enabled: true
The catalog sync feature allows Consul to discover services deployed in Kubernetes, without an operator creating Consul registration files. With catalog sync and Connect inject enabled, you will no longer need to manually register services when they are hosted on a Kubernetes node.
Run helm upgrade with the -f flag to pass in your new values file.
$ helm upgrade hashicorp -f helm-consul-values.yaml hashicorp/consul
Release "hashicorp" has been upgraded. Happy Helming! LAST DEPLOYED: Wed Nov 13 10:07:29 2019 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==> v1/Service hashicorp-consul-connect-injector-svc 19m hashicorp-consul-dns 19m hashicorp-consul-server 19m hashicorp-consul-ui 19m ==> v1/Deployment hashicorp-consul-connect-injector-webhook-deployment 19m hashicorp-consul-sync-catalog 0s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE hashicorp-consul-fqzsz 1/1 Running 0 19m hashicorp-consul-connect-injector-webhook-deployment-5c7d4pvh5h 1/1 Running 0 19m hashicorp-consul-sync-catalog-769c859849-fnmh8 0/1 ContainerCreating 0 0s hashicorp-consul-server-0
Notice you can now see the basic Kubernetes services in the Consul UI.
To learn more about deploying Consul on a full Kubernetes cluster, review the production deployment guide.