Consul provides a unified solution for both managing and monitoring the traffic and services you deploy on your journey from monolith to microservices.
Consul advances the art of network observability significantly. By deploying the Consul control plane along with sidecar proxies for each service, you instantly gain access to valuable metrics about your traffic. The sidecar proxies are capable of collecting Layer 7 (L7) metrics, such as HTTP status codes or request latency from your services. With a small amount of configuration, this data can then be exported to monitoring tools like Prometheus, Grafana, or Jaeger.
In addition to enhanced network observability, Consul provides several options for traffic shaping in your network. Employing modern deployment and testing techniques, such as canary deployments or A/B testing, increases the complexity of network management and introduces new security concerns. With traditional networking solutions, the effort required may have made these goals unattainable. With Consul's comprehensive traffic shaping features, you now have a modern toolkit designed to support these exact use cases, as well as others.
»Prerequisites
To experiment with the features discussed in this tutorial, you will need a Kubernetes cluster with Consul service mesh installed. If you do not have one and want to try out these use cases, follow the Understand Consul service mesh tutorial to deploy a local Kubernetes cluster and install Consul service mesh.
»Choose a proxy as data plane
Once deployed, Consul service mesh becomes your control plane and allows you to choose between multiple proxies as the data plane.
Consul service mesh has first-class support for Envoy as a proxy. Consul can configure Envoy sidecars to proxy http/1.1, http2, or gRPC traffic at L7, or any other TCP-based protocol at L4. Consul also permits you to inject custom Envoy configuration in the proxy service definition, allowing you to use features of Envoy that might not yet be exposed by Consul. Envoy is the default proxy used by the official Consul Helm chart.
Consul includes its own built-in Layer 4 (L4) proxy for testing and development. The built-in proxy is useful when testing configuration and checking mTLS or intentions, but it does not have the L7 capabilities necessary for the observability and traffic shaping features available in Consul versions 1.5 and higher. Since you will be using these features in this tutorial, we recommend that you not override the default Envoy image specified by the official Consul Helm chart.
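For reference, the following is a minimal sketch of Helm values that enable sidecar injection and therefore use Envoy as the data plane. The exact keys can vary between chart versions, and the commented-out imageEnvoy value is shown only as a placeholder for the override we recommend you avoid.
# values.yaml (sketch, assuming a recent version of the official Consul Helm chart)
global:
  name: consul
  # imageEnvoy: "<custom Envoy image>"  # overrides the default Envoy image; not recommended for this tutorial
connectInject:
  enabled: true  # inject Envoy sidecar proxies into annotated pods
controller:
  enabled: true  # manage the configuration entries shown below as Kubernetes CRDs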
Any proxy can be extended to support Consul service mesh as long as it is able to
accept inbound connections and/or establish outbound connections identified as a
particular service. Consul exposes the /v1/agent/connect/*
API endpoints that
permit a proxy to validate certificates for the mTLS connection and authorize or
deny it based on the Consul configuration.
»Observability with sidecar proxies
If the proxy you use as your data plane exposes L7 metrics, Consul permits you to configure the metrics destination and service protocol you want to use to monitor and aggregate the results in your monitoring pipeline.
Starting with version 1.5, Consul is able to configure Envoy proxies to collect L7 metrics including HTTP status codes and request latency, along with many others, and export those to monitoring tools like Prometheus. If you are using Kubernetes, the Consul official Helm chart can simplify much of the necessary configuration, which you can learn about in the L7 observability tutorial.
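For example, the following is a minimal sketch of a proxy-defaults configuration entry, expressed as a Kubernetes CRD, that asks every Envoy sidecar to expose a Prometheus metrics endpoint. The bind address and port are illustrative values; the L7 observability tutorial walks through the complete pipeline.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global  # proxy-defaults entries are always named "global"
spec:
  config:
    # Illustrative address; exposes Envoy's Prometheus metrics for scraping
    envoy_prometheus_bind_addr: "0.0.0.0:9102"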
»Create service defaults with central configuration
Central configuration entries can be created to provide cluster-wide defaults for various aspects of your service mesh, including proxies. When central configuration is enabled during the Helm installation, the Consul agent on each node will look for service configuration defaults that match the registering service instance. If it finds any, the agent will merge those defaults with the service instance configuration. This allows for things like service protocol or proxy configuration to be defined globally and inherited by any affected service registrations.
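For example, the following sketch declares that a hypothetical service named web speaks HTTP, so the L7 observability and traffic management features described below apply to every web instance.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web  # hypothetical service used for illustration
spec:
  protocol: "http"  # enables L7 metrics, routing, and splitting for this service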
»Understand L7 traffic management
In addition to helping you improve observability for your infrastructure, Consul service mesh offers a flexible and comprehensive set of traffic management features. Now, employing techniques such as canary deployments, A/B testing, or blue/green deployments is a completely obtainable goal.
Building on the service catalog and health checks as a foundation, Consul service mesh adds three stages to the traffic management pipeline. This approach allows you to manage a datacenter's pool of services far beyond simple load balancing based on health. Consul gives you a set of tools that do not limit you to the resolution of a single service by name. With Consul, you can implement fine-grained traffic shaping by service version, HTTP header, path prefix, query string, and more.
The three additional stages of the service discovery pipeline are:
- Routing
- Splitting
- Resolution
Each stage manages a different aspect of the L7 traffic management story and can
be configured to work in concert. Each stage of the discovery pipeline can be dynamically
configured via Consul configuration entries. Configuration entries are a Consul specific
concept. Configuration entries allow you to define service mesh settings that either apply
to all services and proxies in the mesh, or are scoped to a single service. Configuration
entries can even be defined in a way that targets only a subset of a service, such as a
specific version in scenarios where multiple versions of the same service may be running.
If you don't provide a configuration for a stage of the pipeline for a given service,
Consul falls back to its default behavior. All of the following config entries can be
managed natively as CRDs using kubectl.
»Routing
Routing is the first stage of the L7 traffic management pipeline and allows the
interception of traffic using L7 criteria such as path prefixes or HTTP headers.
To inject user-defined routing rules into your service mesh, you must register a
service-router configuration entry with the Consul control plane. When a request matches
the criteria specified by a service-router configuration entry, Consul re-routes the traffic
to a different service or service subset as specified by the configuration entry.
A service-router configuration entry kind may only reference service-splitter
or service-resolver entries. The following is an example of a service-router
configuration entry that matches traffic on the /coffees path prefix and re-routes it
away from the target service to a new, route-specific service, coffee-service.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceRouter
metadata:
  name: product-api
spec:
  routes:
    - match:
        http:
          pathPrefix: "/coffees"
      destination:
        service: "coffee-service"
»Splitting
Splitting is the second stage of the L7 traffic management pipeline. To inject
your own splitting rules into your service mesh, you must register a
service-splitter configuration entry with the Consul control plane. A splitter configuration
entry allows you to split incoming requests across different subsets of a single
service (as during staged canary rollouts), or across different services
(as during a v2 rewrite or other type of codebase migration).
A service-splitter configuration may only reference other service-splitter entries
or a service-resolver entry. The following is an example of a service-splitter
configuration entry that splits traffic destined for the coffee-service service
across two different versions of the service. A configuration entry such as this
can be registered with Consul to perform a controlled rollout of a new service version.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: coffee-service
spec:
  splits:
    - weight: 90
      serviceSubset: v1
    - weight: 10
      serviceSubset: v2
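A splitter can also shift traffic between two different services, for example during a migration away from coffee-service to a hypothetical replacement. The following sketch assumes both services are registered in the mesh and speak an L7 protocol.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: coffee-service
spec:
  splits:
    - weight: 90
      service: "coffee-service"  # existing service keeps most of the traffic
    - weight: 10
      service: "new-coffee-service"  # hypothetical replacement receives a small share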
»Resolution
Resolution is the final stage of the L7 traffic management pipeline. To inject
your own resolution rules into your service mesh, you must register a
service-resolver configuration entry with the Consul control plane. A resolver configuration
entry allows you to define which instances of a service should satisfy discovery requests
for the provided name. These configuration entries may only reference other
service-resolver entries.
Examples of things you can do with resolver configuration entries:
- Control where to send traffic if all instances of the api service in the current datacenter are unhealthy (see the failover sketch later in this section).
- Configure service subsets based on Service.Meta.version values.
- Send all traffic for web that does not specify a service subset to the version1 subset.
- Send all traffic for api to new-api (as shown in the sketch after this list).
- Send all traffic for api in all datacenters to instances of api in dc2.
- Create a "virtual service" api-dc2 that sends traffic to instances of api in dc2. This can be referenced in upstreams or in other configuration entries.
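As a sketch of the "send all traffic for api to new-api" scenario above, a redirect can be expressed as follows; the service names are taken from the list and are assumptions for illustration.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: api
spec:
  redirect:
    service: new-api  # discovery requests for "api" resolve to "new-api" instances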
If no resolver configuration is defined for a service, Consul assumes that 100% of traffic flows to the healthy instances of the service with the same name in the current datacenter/namespace, and discovery terminates there.
The following is an example of a service-resolver configuration entry that
defines two subsets of a service based on service registration metadata. This
type of configuration entry works in concert with other configuration entries
that reference a serviceSubset. The service-resolver configuration entry
kind has a powerful filtering mechanism that enables highly flexible service
subset targeting.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: coffee-service
spec:
  defaultSubset: v1
  subsets:
    v1:
      filter: "Service.Meta.version == v1"
    v2:
      filter: "Service.Meta.version == v2"
Note: service-resolver configuration entry kinds function at L4 (unlike service-router and service-splitter kinds). These can be created for services of any protocol such as tcp.
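Similarly, the unhealthy-instances scenario from the list above can be handled with a failover block. The following sketch assumes a second datacenter named dc2 exists; because a service is configured by a single service-resolver entry, treat this and the redirect sketch above as alternatives rather than entries you would apply together.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: api
spec:
  failover:
    "*":  # applies to every subset of the "api" service
      datacenters: ["dc2"]  # send traffic to dc2 when local instances are unhealthy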
»Challenge: Deploy more services in your mesh
In Secure Applications with Consul Service Mesh,
you used the api.yml and web.yml files to deploy two services into your Consul service
mesh. After completing this collection, try deploying your own services and creating
intentions for them.
»Next steps
The service mesh features you used in this collection are built on top of Consul service discovery, health monitoring, and the service catalog. Consul already scales to thousands of nodes when used for service discovery, which makes it an excellent foundation for these additional service mesh features.
Continue building your hands-on experience by setting up a metrics pipeline with Prometheus, Grafana, and Kubernetes in the layer 7 observability interactive tutorial.