Day 1: Deploying Your First Datacenter

Consul Reference Architecture

As you migrate applications to dynamically provisioned infrastructure, you will encounter challenges scaling services and managing the communications between them. Consul helps the components of dynamic applications communicate with each other by providing service discovery. It monitors the health of each node and application so that it only exposes healthy instances as discoverable. Consul's distributed Key-Value store allows you to make runtime-configuration updates across global infrastructure.

This document recommends best practices and provides a reference architecture, including system requirements, datacenter design, networking, and performance optimizations for Consul production deployments.

» Infrastructure Requirements

» Consul Servers

Consul server agents maintain the cluster state, respond to RPC queries (read operations), and process all write operations. Because Consul server agents do most of the heavy lifting, their host sizing is critical for the overall performance, efficiency, and health of the Consul cluster.

The following table provides high-level server host guidelines. Of particular note is the strong recommendation to avoid non-fixed performance CPUs, also known as "burstable" CPU instances.

Type    CPU        Memory         Disk     Typical Cloud Instance Types
Small   2 core     8-16 GB RAM    50 GB    AWS: m5.large, m5.xlarge
                                           Azure: Standard_A4_v2, Standard_A8_v2
                                           GCE: n1-standard-8, n1-standard-16
Large   4-8 core   32-64 GB RAM   100 GB   AWS: m5.2xlarge, m5.4xlarge
                                           Azure: Standard_D4_v3, Standard_D5_v3
                                           GCE: n1-standard-32, n1-standard-64

» Hardware Sizing Considerations

  • The small size would be appropriate for most initial production deployments, or for development/testing environments.

  • The large size is for production environments where there is a consistently high workload.

For more information on server requirements, review the server performance documentation.

» Infrastructure Diagram

[Reference diagram]

» Datacenter Design

You may deploy a Consul cluster (typically three or five servers plus client agents) in a single physical datacenter or across multiple datacenters. For a large cluster with high runtime reads and writes, colocating the servers in the same physical location improves performance. In cloud environments, you may spread a single datacenter across multiple availability zones, i.e., run each server on a separate host in its own availability zone. Consul also supports multi-datacenter deployments via separate clusters joined over WAN links. In some cases, you may also deploy two or more Consul clusters in the same LAN environment.

» Single Datacenter

We recommend a single Consul cluster for applications deployed in the same datacenter. Consul supports traditional three-tier applications as well as microservices.

Typically, you will need three to five servers to balance availability and performance. Together, these servers run the Raft-backed consistent state store that updates catalog, session, prepared query, ACL, and KV state.
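As a concrete sketch (not from this guide), a server agent might be configured and started as follows; the datacenter name, data directory, and join addresses are placeholder assumptions:

    # Write a minimal server agent configuration (placeholder values).
    $ cat > /etc/consul.d/server.hcl <<'EOF'
    server           = true
    bootstrap_expect = 3          # wait for three servers before electing a leader
    datacenter       = "dc1"
    data_dir         = "/opt/consul"
    retry_join       = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
    EOF

    # Start the agent against the configuration directory.
    $ consul agent -config-dir=/etc/consul.d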

The recommended maximum cluster size for a single datacenter is 5,000 nodes. For a write-heavy and/or read-heavy cluster, you may need to reduce the maximum number of nodes further, depending on the number and size of KV pairs and the number of watches. As you add more client machines, gossip takes longer to converge. Similarly, when a new server joins an existing multi-thousand-node cluster with a large KV store, it may take more time to replicate the store to the new server's log, and the update rate may increase.

Service tags help you make the queries you need against your cluster. They can distinguish between different services, or between different versions of the same service. Without them, you cannot narrow a search to a specific subset of a service's nodes, such as those running one particular version.
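For illustration, the following registers a hypothetical web service with a version tag and then resolves only the tagged instances through the DNS interface; the service name, tag, and port are assumptions:

    # Register a "web" service with a version tag (placeholder values).
    $ cat > /etc/consul.d/web.hcl <<'EOF'
    service {
      name = "web"
      tags = ["v1"]
      port = 8080
    }
    EOF
    $ consul reload

    # Resolve only the v1-tagged instances via the DNS interface.
    $ dig @127.0.0.1 -p 8600 v1.web.service.consul SRV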

In cases where agents can't all contact each other due to network segmentation, you can use Consul's network segments (Consul Enterprise only) to create multiple tenants that share Raft servers in the same cluster. Each tenant has its own gossip pool and doesn't communicate with the agents outside this pool. All the tenants, however, do share the KV store. If you don't have access to Consul network segments you can create discrete Consul datacenters to isolate agents from each other.
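A rough sketch of the configuration, assuming a Consul Enterprise binary and a hypothetical segment named alpha on port 8303:

    # On each server: define the additional network segment (Consul Enterprise only).
    $ cat > /etc/consul.d/segments.hcl <<'EOF'
    segments = [
      {
        name = "alpha"
        port = 8303
      }
    ]
    EOF

    # On each client in that tenant: join the segment's gossip pool.
    $ cat > /etc/consul.d/segment.hcl <<'EOF'
    segment = "alpha"
    EOF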

» Multiple Datacenters

You can join Consul clusters running in different datacenters over WAN links. The clusters operate independently and communicate over the WAN only on port 8302. Unless explicitly configured via the CLI or API, Consul servers only return results from their local datacenter. Consul does not replicate data between datacenters, but you can use the consul-replicate tool to replicate KV data periodically.
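Joining datacenters over the WAN is a single command run from a server in one datacenter against a server in the other; the address below is a placeholder:

    # From a server in dc1, join a server in dc2 over the WAN gossip pool (port 8302).
    $ consul join -wan 10.1.0.10

    # Verify that servers from both datacenters appear in the WAN member list.
    $ consul members -wan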

The network areas feature in Consul Enterprise provides advanced federation. For example, imagine that datacenter1 (dc1) hosts services like LDAP (or an ACL database) and shares them with datacenter2 (dc2) and datacenter3 (dc3). However, due to compliance requirements, servers in dc2 must not connect with servers in dc3. Basic WAN federation cannot isolate dc2 from dc3: it requires that all the servers in dc1, dc2, and dc3 be connected in a full mesh, with both the gossip (8302 TCP/UDP) and RPC (8300) ports open for communication.

Network areas allow peering between datacenters so that shared services are discoverable over the WAN. With network areas, servers in dc1 can communicate with those in dc2 and dc3 without any connection between dc2 and dc3, which meets the compliance requirement in our example use case. Servers that are part of a network area communicate over RPC only. This removes the overhead of sharing and maintaining the symmetric key used by the gossip protocol across datacenters. It also reduces the attack surface, since the gossip ports no longer need to be opened in security gateways or firewalls.
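With Consul Enterprise, the workflow might look like the following sketch; dc1 would repeat the same steps for dc3, leaving dc2 and dc3 unconnected. The server address is a placeholder:

    # On a server in dc1: create an area that peers with dc2 (Consul Enterprise).
    $ consul operator area create -peer-datacenter=dc2

    # Join a dc2 server into the area; members communicate over RPC, not WAN gossip.
    $ consul operator area join -peer-datacenter=dc2 10.2.0.10

    # List the configured areas.
    $ consul operator area list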

Consul's prepared queries allow clients to fail over to another datacenter for service discovery. For example, if a service named payment in the local datacenter dc1 goes down, a prepared query lets users define a geographic fallback order: Consul checks the nearest datacenters, in order, for healthy instances of the same service.

By default, prepared queries resolve in the local datacenter first. They do not support querying the KV store, but they do work with ACLs. Prepared query configurations and templates are maintained consistently in Raft and are executed on the servers.
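As an example, the following sketch creates a prepared query for the payment service that fails over to the two nearest datacenters; the query name and NearestN value are assumptions for illustration:

    # Create the prepared query via the HTTP API.
    $ curl -s -X POST http://127.0.0.1:8500/v1/query -d '{
        "Name": "payment",
        "Service": {
          "Service": "payment",
          "Failover": { "NearestN": 2 }
        }
      }'

    # Resolve it through DNS like a regular service.
    $ dig @127.0.0.1 -p 8600 payment.query.consul SRV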

» Network Connectivity

LAN gossip occurs between all agents in a single datacenter, with each agent sending a periodic probe to random agents from its member list. Both client and server agents participate in the gossip. The initial probe is sent over UDP every second. If a node fails to acknowledge within 200 ms, the agent probes again over TCP. If the TCP probe fails (10-second timeout), the agent asks a configurable number of random nodes to probe the same node (known as an indirect probe). If the peers receive no response from the node either, it is marked as down.

The agent's status directly affects the service discovery results. If an agent is down, the services it is monitoring will also be marked as down.

In addition, each agent periodically performs a full state sync over TCP, exchanging its full view of the member list (node names, IP addresses, and health status) with a random peer. A full sync is expensive relative to the standard gossip probes described above, so it runs at an interval determined by cluster size to keep overhead low, typically between 30 seconds and 5 minutes. For more details, refer to the Serf gossip docs.
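You can inspect the member list an agent has converged on with the CLI:

    # Show this agent's view of the LAN gossip pool, including each node's status.
    $ consul members

    # Include additional gossip metadata, such as protocol versions.
    $ consul members -detailed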

In a larger network that spans L2 segments, traffic typically traverses a firewall and/or a router. You must update your ACL or firewall rules to allow the following ports:

Name            Port   Flag                             Description
Server RPC      8300                                    Used by servers to handle incoming requests from other agents. TCP only.
Serf LAN        8301                                    Used to handle gossip in the LAN. Required by all agents. TCP and UDP.
Serf WAN        8302   -1 to disable (Consul 1.0.7+)    Used by servers to gossip over the LAN and WAN to other servers. TCP and UDP.
HTTP API        8500   -1 to disable                    Used by clients to talk to the HTTP API. TCP only.
DNS Interface   8600   -1 to disable                    Used to resolve DNS queries. TCP and UDP.

By default, agents listen for HTTP and DNS traffic on the loopback interface (127.0.0.1) only.
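If agents must serve HTTP or DNS beyond localhost, or you want to disable the unused WAN serf port in a single-datacenter deployment, both are agent configuration settings; a minimal sketch, assuming HCL configuration files:

    # Widen the HTTP/DNS bind address and disable the unused WAN serf port.
    $ cat > /etc/consul.d/network.hcl <<'EOF'
    client_addr = "0.0.0.0"    # HTTP (8500) and DNS (8600) listen on all interfaces
    ports {
      serf_wan = -1            # disable Serf WAN (Consul 1.0.7+)
    }
    EOF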

For more information about the ports that Consul uses, see the ports section of the agent configuration documentation.

» Summary

In this guide we've discussed considerations for deploying Consul, including hardware sizing, datacenter design, and network connectivity. Next, review the Deployment Guide to learn the steps required to install and configure a single HashiCorp Consul cluster.