

Observe Layer 7 Traffic with Consul Service Mesh

In the process of migrating from a monolithic architecture to microservices, there are several best practices that can help you keep tighter control over the status of your infrastructure. Consul gives you a unified solution for both managing and monitoring the traffic and services you have deployed.

You can get valuable metrics about your traffic by deploying sidecar proxies. The sidecar proxies are capable of collecting Layer 7 (L7) metrics, such as HTTP status codes or request latency, from your services. This data can be exported to monitoring tools like Prometheus.

The need to speed up deployments or to leverage testing techniques, such as canary deployment or A/B testing, increases the complexity of network management and generates new security issues that were not present, at least not to this extent, in more traditional deployments. Consul offers you a unified interface for traffic shaping in your network.

»Prerequisites

To implement the use cases discussed in this tutorial, you will need a Kubernetes cluster with Consul service mesh enabled. If you do not have one and want to test the use cases, you can follow Understand Consul service mesh to deploy a Kubernetes cluster locally and deploy Consul as a service mesh.

»Choose a proxy as data plane

Once deployed, Consul service mesh becomes your control plane and allows you to choose between multiple proxies as the data plane.

  • Consul includes its own built-in Layer 4 (L4) proxy for testing and development with Consul. The built-in proxy is useful when testing configuration entries and when checking mTLS or intentions, but it does not have the L7 capabilities necessary for the observability features, released with Consul 1.5, that you are going to learn about in this tutorial.

  • Consul service mesh has first-class support for Envoy as a proxy. Consul can configure Envoy sidecars to proxy http/1.1, http2, or gRPC traffic at L7, or any other TCP-based protocol at L4. Consul also permits you to inject custom Envoy configuration in the proxy service definition, allowing you to use more powerful Envoy features that might not yet be exposed by Consul. Envoy is the default choice for the official Consul Helm chart.

Any proxy can be extended to support Consul service mesh as long as it is able to accept inbound connections and/or establish outbound connections identified as a particular service. Consul exposes /v1/agent/connect/* API endpoints that permit a proxy to validate certificates for the mTLS connection and authorize or deny it based on the Consul configuration.

»Observability with sidecar proxies

If the proxy you use as your data plane exposes L7 metrics, Consul permits you to configure the metrics destination and the service protocol you want to monitor, and to aggregate the results in your monitoring pipeline.

Starting with version 1.5, Consul is able to configure Envoy proxies to collect L7 metrics including HTTP status codes and request latency, along with many others, and export those to monitoring tools like Prometheus. If you are using Kubernetes, the Consul official Helm chart can simplify much of the necessary configuration, which you can learn about in the observability tutorial.
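
For example, a minimal proxy-defaults configuration entry could expose Envoy's Prometheus metrics endpoint for every sidecar in the mesh. This is a sketch; the bind address shown here is an assumption and should match your Prometheus scrape configuration.

```hcl
# proxy-defaults.hcl - hypothetical example, applied with `consul config write proxy-defaults.hcl`
Kind = "proxy-defaults"
Name = "global"

Config {
  # Configure every Envoy sidecar to publish its metrics (HTTP status
  # codes, request latency, and so on) on this address so Prometheus
  # can scrape them.
  envoy_prometheus_bind_addr = "0.0.0.0:9102"
}
```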

»Understand L7 traffic management

In addition to helping you introduce observability practices in your infrastructure, Consul service mesh offers a flexible set of traffic management options.

Techniques such as canary deployment, A/B testing, or blue/green deployments are gaining popularity, and their implementation increases the complexity of network management and generates new security issues that were not present, at least not to this extent, in more traditional deployments. One of the main challenges is how to properly shape the traffic.

Using the service catalog and health checks as a foundation, Consul service mesh provides a three-stage traffic management approach on top of them. This approach helps you carve up a single datacenter's pool of services beyond simply returning all healthy instances for load balancing, and it gives you finer granularity than the level of a single service when deciding that a specific subset of the service instances should receive traffic.

Proxy upstreams are discovered using a series of stages:

  • routing,
  • splitting,
  • resolution.

These stages represent different ways of managing L7 traffic. Each stage of this discovery process can be dynamically reconfigured via various configuration entries. When a configuration entry is missing, that stage will fall back on default behavior.

»Routing

Routing is the first stage of traffic management. It allows the interception of traffic using L7 criteria, such as path prefixes or HTTP headers, and changes behavior by sending traffic to a different service or service subset.

A service-router configuration entry may only reference service-splitter or service-resolver entries.
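
As a sketch, a service-router entry like the following could send requests whose path starts with /admin to a separate admin service. The service names web and admin are assumptions for illustration.

```hcl
# web-router.hcl - hypothetical example, applied with `consul config write web-router.hcl`
Kind = "service-router"
Name = "web"

Routes = [
  {
    # Intercept traffic using an L7 criterion: the request path prefix.
    Match {
      HTTP {
        PathPrefix = "/admin"
      }
    }

    # Send matching requests to a different service.
    Destination {
      Service = "admin"
    }
  },
]
```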

»Splitting

A splitter configuration entry allows you to split incoming requests across different subsets of a single service (for example, during staged canary rollouts), or across different services (for example, during a v2 rewrite or other type of codebase migration).

A service-splitter configuration entry may only reference other service-splitter or service-resolver entries.
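
For instance, a service-splitter entry for a staged canary rollout of a hypothetical web service could look like the following. The subset names v1 and v2 are assumptions and must be defined in a matching service-resolver entry, as shown in the next section.

```hcl
# web-splitter.hcl - hypothetical example, applied with `consul config write web-splitter.hcl`
Kind = "service-splitter"
Name = "web"

Splits = [
  {
    # Keep most traffic on the current version.
    Weight        = 90
    ServiceSubset = "v1"
  },
  {
    # Send a small share of requests to the canary subset.
    Weight        = 10
    ServiceSubset = "v2"
  },
]
```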

»Resolution

A resolver configuration entry allows you to define which instances of a service should satisfy discovery requests for the provided name.

Service-resolver configuration entries may only reference other service-resolver entries. Examples of things you can do with resolver configuration entries:

  • Control where to send traffic if all instances of api service in the current datacenter are unhealthy.
  • Configure service subsets based on Service.Meta.version values.
  • Send all traffic for web that does not specify a service subset to the version1 subset.
  • Send all traffic for api to new-api.
  • Send all traffic for api in all datacenters to instances of api in dc2.
  • Create a "virtual service" api-dc2 that sends traffic to instances of api in dc2. This can be referenced in upstreams or in other configuration entries.

If no resolver configuration is defined for a service, Consul assumes that 100% of traffic flows to the healthy instances of the service with the same name in the current datacenter/namespace, and discovery terminates there.
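
As an illustration, a service-resolver entry could define the version subsets used by the splitter sketch above, based on Service.Meta.version values. The service and subset names are assumptions.

```hcl
# web-resolver.hcl - hypothetical example, applied with `consul config write web-resolver.hcl`
Kind          = "service-resolver"
Name          = "web"

# Traffic that does not specify a subset goes to v1.
DefaultSubset = "v1"

Subsets = {
  v1 = {
    # Select instances registered with Service.Meta.version set to v1.
    Filter = "Service.Meta.version == v1"
  }
  v2 = {
    Filter = "Service.Meta.version == v2"
  }
}
```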

»Create service defaults with central configuration

Depending on the stage of your cloud journey you might need different control over both observability and traffic management.

Configuration entries for both can be created to provide cluster-wide defaults for various aspects of your service mesh. When the agent is configured to enable central service configurations, it will look for service configuration defaults that match a registering service instance. If it finds any, the agent will merge those defaults with the service instance configuration. This allows for things like service protocol or proxy configuration to be defined globally and inherited by any affected service registrations.
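
As a minimal sketch, assuming a service named web that speaks HTTP, a service-defaults entry could set the protocol centrally so that every registration of that service inherits it. On the agent side this behavior depends on enable_central_service_config being enabled.

```hcl
# web-defaults.hcl - hypothetical example, applied with `consul config write web-defaults.hcl`
Kind     = "service-defaults"
Name     = "web"

# Declare the service protocol once, centrally, so every instance of
# "web" that registers inherits it and its sidecar can apply L7 features.
Protocol = "http"
```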

»Challenge: Deploy more services in your mesh

In Secure Applications with Consul Service Mesh you used api.yml and web.yml to deploy two services into your Consul service mesh. The blueprint provides you with other files, counting-service.yaml and dashboard-service.yaml, containing additional service definitions. After completing this collection, try deploying more services and creating intentions for them.

»Next steps

Service discovery and service health monitoring are prerequisites for the service mesh approach. The service mesh functionalities you used in this collection are built on top of Consul service discovery and the availability of the service catalog. Consul already scales to thousands of nodes when used for service discovery, which makes it an excellent primitive for the service mesh functionalities.