Consul service mesh deploys an Envoy sidecar proxy alongside each service instance in a datacenter. The sidecar proxy brokers traffic between the local service instance and other services registered with Consul. The proxy is aware of all traffic that passes through it. In addition to securing inter-service communication, the proxy can also collect and expose data about the service instance. Starting with version 1.5, Consul service mesh is able to configure Envoy to expose layer 7 metrics, such as HTTP status codes or request latency, to monitoring tools like Prometheus.
In this tutorial, you will:
- Configure Consul to expose Envoy metrics to Prometheus
- Deploy Consul using the official Helm chart.
- Deploy Prometheus and Grafana using their official Helm charts.
- Deploy a multi-tier demo application that is configured to be scraped by Prometheus.
- Start a traffic simulation deployment, and observe the application traffic in Grafana.
Tip: While this tutorial shows you how to deploy a metrics pipeline on Kubernetes, all the technologies the tutorial uses are platform agnostic; Kubernetes is not necessary to collect and visualize layer 7 metrics with Consul service mesh.
»Prerequisites
If you already have a Kubernetes cluster running with Helm and kubectl installed, you can start on the tutorial right away. If not, set up a Kubernetes cluster using your favorite method that supports persistent volume claims, or install and start Minikube or kind.
- Kubernetes v1.18.2
- Minikube v1.10.1
- kind v0.8.1
- Helm v3.2.1
- Consul 1.9
- consul-helm 0.27.0
If you use Minikube, you may want to start it with a little bit of extra memory.
$ minikube start --memory 4096 --kubernetes-version v1.18.2
You must also install kubectl and Helm.
Also, ensure you have the latest Helm charts for Consul, Prometheus, and Grafana.
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
helm repo add grafana https://grafana.github.io/helm-charts && \
helm repo add hashicorp https://helm.releases.hashicorp.com && \
helm repo update
Next, clone the GitHub repository that contains the files you'll use with this tutorial.
$ git clone https://github.com/hashicorp/learn-consul-kubernetes.git
Now, change directories into the repository you just cloned.
$ cd learn-consul-kubernetes
Now, check out the tagged version verified for this tutorial.
$ git checkout tags/v0.0.3
We'll refer to this directory as your working directory, and you'll run the rest of the commands in this tutorial from this directory.
»Deploy Consul service mesh using Helm
Once you have set up the prerequisites, you're ready to install Consul.
Open the file in your working directory called layer7-observability/helm/consul-values.yaml and review the settings. You can override many of the values in Consul's values file using annotations on specific services.
# name your datacenter
global:
  name: consul
  datacenter: l7

server:
  # use 1 server
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
  extraConfig: |
    {
      "ui_config": {
        "enabled": true,
        "metrics_provider": "prometheus",
        "metrics_proxy": {
          "base_url": "http://prometheus-server"
        }
      }
    }

client:
  enabled: true
  # enable gRPC on your client to support Consul service mesh
  grpc: true

ui:
  enabled: true

connectInject:
  enabled: true
  # inject an Envoy sidecar into every new pod, except for those with annotations that prevent injection
  default: true
  # these settings enable L7 metrics collection and are new in 1.5
  centralConfig:
    enabled: true
    # set the default protocol (can be overwritten with annotations)
    defaultProtocol: 'http'
    # proxyDefaults is a raw JSON string that will be applied to all Connect
    # proxy sidecar pods that can include any valid configuration for the
    # configured proxy.
    proxyDefaults: |
      {
        "envoy_prometheus_bind_addr": "0.0.0.0:9102"
      }

controller:
  enabled: true
Note the extraConfig and proxyDefaults entries. The extraConfig entry injects the required Consul settings to enable the Prometheus metrics provider. The proxyDefaults entry injects a proxy-defaults Consul configuration entry for the envoy_prometheus_bind_addr setting that is applied to all Envoy proxy instances. Consul then uses that setting to configure where Envoy will publish Prometheus metrics. This is important because you will need to annotate your pods with this port so that Prometheus can scrape them. We will cover this in more detail later in the tutorial.
Warning: By default, the chart will install an insecure configuration of Consul. This provides a less complicated out-of-box experience for new users but is not appropriate for a production setup. Review the Secure Consul and Registered Services on Kubernetes tutorial for instructions on how to secure your datacenter for production.
Now install Consul in your Kubernetes cluster, giving the Helm release the name consul.
$ helm install -f layer7-observability/helm/consul-values.yaml consul hashicorp/consul --version "0.27.0" --wait
The output will be a list of all the Kubernetes resources created. The output has been abbreviated, but will look similar to the following.
NAME: consul
LAST DEPLOYED: Thu Aug 13 07:52:03 2020
NAMESPACE: default
STATUS: deployed
...
You can inspect the release at any time with the following commands.
$ helm status consul
$ helm get all consul
Check that Consul is running in your Kubernetes cluster using kubectl.
$ watch kubectl get pods
Consul setup is complete when all pods have a status of Running, as illustrated in the following output.
NAME READY STATUS RESTARTS AGE
consul-9p9jf 1/1 Running 0 31m
consul-connect-injector-webhook-deployment-b797f6fd4-2cwvm 1/1 Running 0 31m
consul-server-0 1/1 Running 0 31m
Leave the watch running and open a new terminal.
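Optionally, you can confirm that the proxyDefaults setting in your values file was translated into a Consul proxy-defaults configuration entry. The following is a minimal sketch using the Consul CLI inside the server pod; it assumes the entry is registered under the name global, as the Helm chart does by default.
$ kubectl exec consul-server-0 -- consul config read -kind proxy-defaults -name global
The JSON it returns should include the envoy_prometheus_bind_addr setting from your values file.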
»Deploy the metrics pipeline
Consul service mesh can integrate with a variety of metrics tooling, but in this tutorial, you will use Prometheus and Grafana to collect and visualize metrics.
»Deploy Prometheus with Helm
Install the official Prometheus Helm chart using the values in
layer7-observability/helm/prometheus-values.yaml
.
$ helm install -f layer7-observability/helm/prometheus-values.yaml prometheus prometheus-community/prometheus --version "11.7.0" --wait
The output has been abbreviated, but will look similar to the following.
NAME: prometheus
LAST DEPLOYED: Thu Aug 13 07:52:32 2020
NAMESPACE: default
STATUS: deployed
...
For more information on running Prometheus, visit:
https://prometheus.io/
Switch back to the terminal where the kubectl watch is running. Prometheus setup is complete when all pods have a status of Running, as illustrated in the following output.
consul-9p9jf 1/1 Running 0 43m
consul-connect-injector-webhook-deployment-b797f6fd4-2cwvm 1/1 Running 0 43m
consul-server-0 1/1 Running 0 43m
prometheus-kube-state-metrics-c65b87574-4zjb7 3/3 Running 0 42m
prometheus-node-exporter-cq7lb 3/3 Running 0 42m
prometheus-pushgateway-697b8c46cd-rjq4q 3/3 Running 0 42m
prometheus-server-64c7484778-hwtmg 4/4 Running 0 42m
»Deploy Grafana with Helm
Installing Grafana will follow a similar process. Install the official Grafana Helm chart using the values in layer7-observability/helm/grafana-values.yaml. This configuration will tell Grafana to use Prometheus as a datasource, and set the admin password to password.
$ helm install -f layer7-observability/helm/grafana-values.yaml grafana grafana/grafana --version "5.3.6" --wait
The output has been abbreviated, but will look similar to the following.
NAME: grafana
LAST DEPLOYED: Thu Aug 13 07:53:43 2020
NAMESPACE: default
STATUS: deployed
...
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
###### the Grafana pod is terminated. #####
#################################################################################
Switch back to the terminal where the kubectl watch is running. Grafana setup is complete when the Grafana pod has a status of Running, as illustrated in the following output.
NAME READY STATUS RESTARTS AGE
consul-9p9jf 1/1 Running 0 117m
consul-connect-injector-webhook-deployment-b797f6fd4-2cwvm 1/1 Running 0 117m
consul-server-0 1/1 Running 0 117m
grafana-7d6f454b75-pb687 3/3 Running 0 115m
prometheus-kube-state-metrics-c65b87574-4zjb7 3/3 Running 0 117m
prometheus-node-exporter-cq7lb 3/3 Running 0 117m
prometheus-pushgateway-697b8c46cd-rjq4q 3/3 Running 0 117m
prometheus-server-64c7484778-hwtmg 4/4 Running 0 117m
The output includes shell-specific instructions to access your Grafana UI and admin password. You specified the admin password in the layer7-observability/helm/grafana-values.yaml file as password. To expose the Grafana UI outside the cluster, issue the following command.
$ export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}") && \
kubectl --namespace default port-forward $POD_NAME 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Leave this port-forward session active so that you can visit the UI again later once metrics are being collected.
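If you ever need to recover the admin password, it is also stored in a Kubernetes secret created by the chart. The following lookup is a sketch of what the chart's own notes typically suggest, assuming the release is named grafana in the default namespace.
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo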
Navigate to http://localhost:3000 in a browser tab and log in to the Grafana UI.
Once you have logged into the Grafana UI, hover over the dashboards icon (four squares in the left-hand menu) and then click the "Manage" option. This will take you to a page that gives you some choices about how to upload Grafana dashboards. Click the "Import" button on the right-hand side of the screen. Open the file called layer7-observability/grafana/hashicups-dashboard.json and copy the contents into the JSON window of the Grafana UI. Click through the rest of the options, and you will end up with a dashboard waiting for data to display.
»Deploy a demo application on Kubernetes
Now that your monitoring pipeline is set up, deploy a demo application that will generate data. We will be using HashiCups, an application that emulates an online ordering app for a coffee shop. For this tutorial, the HashiCups application includes a React front end, a GraphQL API, a REST API, and a Postgres database.
All the files defining HashiCups are in the layer7-observability/hashicups directory. Open the layer7-observability/hashicups/frontend.yaml file. Notice that the Deployment has been annotated with the following Prometheus configuration.
prometheus.io/scrape: 'true'
prometheus.io/port: '9102'
All of the other resources have been similarly annotated. This allows Prometheus to discover resources in the Kubernetes cluster that should be exposing metrics, and tells Prometheus which port the metrics are exposed on. Recall that earlier in the tutorial you configured the proxyDefaults entry in the consul-values.yaml file, and set envoy_prometheus_bind_addr to 0.0.0.0:9102. By setting this annotation on each resource, you are configuring the other side of the feedback loop. It may be helpful to think of this as a publish/subscribe pattern: you told Consul to configure Envoy to publish metrics on port 9102, and now you have told Prometheus to subscribe to each proxy on port 9102. You will verify this configuration is applied later in the tutorial.
Open a new terminal, and deploy the demo application.
$ kubectl apply -f layer7-observability/hashicups
service/frontend-service created
configmap/nginx-configmap created
deployment.apps/frontend created
service/products-api-service created
serviceaccount/products-api created
configmap/db-configmap created
deployment.apps/products-api created
service/postgres created
deployment.apps/postgres created
service/public-api-service created
deployment.apps/public-api created
HashiCups will take a few minutes to deploy. Switch back to the terminal where the kubectl watch is running. The HashiCups deployment is complete when all pods have a status of Running, as illustrated in the following output.
NAME READY STATUS RESTARTS AGE
consul-9p9jf 1/1 Running 0 122m
consul-connect-injector-webhook-deployment-b797f6fd4-2cwvm 1/1 Running 0 122m
consul-server-0 1/1 Running 0 122m
frontend-6454bd57f-v958p 3/3 Running 0 120m
grafana-7d6f454b75-pb687 3/3 Running 0 120m
postgres-55899dddd-5z4hx 3/3 Running 0 120m
products-api-9cd5bbb69-j8x69 3/3 Running 0 120m
prometheus-kube-state-metrics-c65b87574-4zjb7 3/3 Running 0 122m
prometheus-node-exporter-cq7lb 3/3 Running 0 122m
prometheus-pushgateway-697b8c46cd-rjq4q 3/3 Running 0 122m
prometheus-server-64c7484778-hwtmg 4/4 Running 0 122m
public-api-5f698b886-rfnh7 3/3 Running 0 120m
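As a quick sanity check that the Prometheus annotations were propagated from the Deployments to the running pods, you can read one back with kubectl. The following sketch assumes the frontend pods carry the app=frontend label shown later on the Prometheus targets screen; it should print 9102, the port you configured in proxyDefaults.
$ kubectl get pods -l app=frontend -o jsonpath='{.items[0].metadata.annotations.prometheus\.io/port}{"\n"}'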
Test the application by viewing the React front end. You can do this by forwarding the frontend deployment's port 80 to your development host.
$ kubectl port-forward deploy/frontend 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
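Before opening the browser, you can optionally confirm the front end responds with a quick curl from another terminal (the port-forward session must stay active). A 200 status code indicates the application is serving traffic.
$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080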
Navigate to http://localhost:8080 in a browser window. You should observe the following screen.
Now that the application is running, let's verify that Consul did, in fact, configure Envoy to publish metrics at port 9102. The Envoy sidecar proxy's admin interface can be reached at port 19000. Open a new terminal, and issue the following command.
$ kubectl port-forward deploy/frontend 19000:19000
Forwarding from 127.0.0.1:19000 -> 19000
Forwarding from [::1]:19000 -> 19000
Navigate to http://localhost:19000/config_dump in a browser window. You should see what looks like a raw JSON document dumped to the screen. This is the Envoy configuration. Search for 9102 and you should find two different stanzas that reference this port. One of them is included next for reference.
{
  "name": "envoy_prometheus_metrics_listener",
  "address": {
    "socket_address": {
      "address": "0.0.0.0",
      "port_value": 9102
    }
  }
}
Notice that the configuration matches the values you specified in the proxyDefaults stanza in the layer7-observability/helm/consul-values.yaml file. This confirms that Consul has configured Envoy to publish Prometheus metrics.
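If you prefer the terminal to a browser, you can pull the same stanza out of the admin endpoint with curl and jq. This is a sketch that assumes jq is installed on your development host and the port-forward session is still active.
$ curl -s localhost:19000/config_dump | jq '.. | objects | select(.name? == "envoy_prometheus_metrics_listener") | .address'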
Enter CTRL-C to stop the port-forward session to the sidecar proxy. You will not need to reference it again for the remainder of the tutorial.
»Visualize application metrics
While Grafana is optimized for at-a-glance observability, the Prometheus UI can be useful as well. Issue the following command to expose the Prometheus UI to your development host.
$ kubectl port-forward deploy/prometheus-server 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
»Discover available metrics with Prometheus
Navigate to http://localhost:9090 in a browser window, and you should observe the default Prometheus UI. In the textbox at the top of the screen paste sum by(__name__)({app="products-api"}) and then click the button labeled "Execute". Your screen will now look similar to the following.
You have now performed a PromQL query that lists all available metrics for resources that have the products-api label. In this case, that is the REST API resource you deployed from the layer7-observability/hashicups directory. This list of metrics can be used as constraints for further PromQL queries, both here and in the Grafana UI.
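Once you know which metric names are available, you can build more targeted queries, either in the UI or against the Prometheus HTTP API. The following hypothetical example computes the per-second request rate over the last minute; envoy_http_downstream_rq_total is a commonly exposed Envoy counter, but the exact metric names depend on your Envoy version.
$ curl -s http://localhost:9090/api/v1/query --data-urlencode 'query=sum(rate(envoy_http_downstream_rq_total{app="products-api"}[1m]))'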
Next, from within the Prometheus UI, click the dropdown menu item labeled "Status" in the menu bar at the top of the screen, and click on the option labeled "Targets". This will navigate the UI to the Targets screen where you can review all of the resources that Prometheus is monitoring. The Prometheus Helm chart you installed earlier is configured to monitor all of the Kubernetes infrastructure as well as Prometheus itself.
Click on the button labeled "show less" next to each section header except the one labeled "kubernetes-pods". Your screen should appear similar to the following.
Notice that four pods are running and have a State of "UP". Look at the first label for each pod in the Label column. You should have entries for app="frontend", app="postgres", app="products-api", and app="public-api". This confirms that Prometheus is collecting metrics from all of your annotated resources, and that they are all up and running.
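The same information is available from the Prometheus targets API if you want to script this check. A sketch, again assuming jq and the active port-forward session:
$ curl -s localhost:9090/api/v1/targets | jq -r '.data.activeTargets[] | select(.labels.app != null) | "\(.labels.app): \(.health)"'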
Go back to the terminal where your port-forward session for the Prometheus UI is running, and type CTRL-C to end the session. You will not need to access the Prometheus UI for the remainder of the tutorial.
»Simulate traffic
Now that you know the application is running, start generating some load so that you will have some metrics to look at in Grafana.
$ kubectl apply -f layer7-observability/traffic.yaml
configmap/k6-configmap created
deployment.apps/traffic created
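If you want to confirm that the load generator is actually sending requests, tail its logs. This assumes the deployment created above is named traffic, per the kubectl output.
$ kubectl logs deploy/traffic --follow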
Envoy exposes a huge number of metrics. Which metrics are important to monitor will depend on your application. For this tutorial we have preconfigured a HashiCups-specific Grafana dashboard with a couple of basic metrics, but you should systematically consider what others you will need to collect as you move from testing into production.
»View Grafana dashboard
Now that you have metrics flowing through your pipeline, and a traffic simulation deployment running, navigate back to your Grafana dashboard in the Grafana UI, and you should observe a screen similar to the following.
Notice that once you started the traffic simulation deployment Prometheus started to log active connections. This dashboard is simplistic, but illustrates that metrics are flowing through the pipeline, and should give you a reasonable starting point for setting up your own observability tooling.
»View traffic in the Consul UI
You can also view service metrics in the Consul UI. In a new terminal session, issue the following command to expose the Consul UI to the development host.
$ kubectl port-forward consul-server-0 8500:8500
Forwarding from 127.0.0.1:8500 -> 8500
Forwarding from [::1]:8500 -> 8500
Open http://localhost:8500 in a new browser tab, and navigate to the "Services" screen. Select the postgres service. You should observe that a chart with some basic metrics is embedded in the service tile. Hover over the four tile elements to get tooltip descriptions of the different metrics. You can also hover over the timeline chart embedded in the tile, and review additional metrics for any point in time during the last 15 minutes.
»Clean up
If you've been using Minikube, you can tear down your environment by issuing the following command.
$ minikube delete
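Alternatively, if you want to keep your cluster but remove everything this tutorial installed, you can uninstall the Helm releases and delete the demo resources. This is a sketch using the release names and file paths from earlier in the tutorial.
$ helm uninstall consul prometheus grafana && \
  kubectl delete -f layer7-observability/hashicups -f layer7-observability/traffic.yaml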
If you want to get rid of the configuration files and Consul Helm chart, recursively remove the learn-consul-kubernetes directory.
$ cd .. && rm -rf learn-consul-kubernetes
»Next steps
In this tutorial, you set up layer 7 metrics collection and visualization in a Kubernetes cluster using Consul service mesh, Prometheus, and Grafana, all deployed via Helm charts. Specifically, you:
- Configured Consul and Envoy to expose application metrics to Prometheus.
- Deployed Consul using the official Helm chart.
- Deployed Prometheus and Grafana using their official Helm charts.
- Deployed a multi-tier demo application that was configured to be scraped by Prometheus.
- Started a traffic simulation deployment, and observed the metrics in Prometheus and Grafana.
Because all of these programs can run outside of Kubernetes, you can set this pipeline up in any environment or collect metrics from workloads running on mixed infrastructure.
To learn more about the configuration options in Consul that enable layer 7 metrics collection with or without Kubernetes, refer to our documentation. For more information on centrally configuring Consul, take a look at the centralized configuration documentation.