Getting Started

Run the Consul Agent

After Consul is installed, you must run the agent. The agent can run in either server or client mode. Each datacenter must have at least one server, though a cluster of 3 or 5 servers is recommended. A single-server deployment is highly discouraged, as data loss is inevitable in a failure scenario.

All other agents run in client mode. A client is a very lightweight process that registers services, runs health checks, and forwards queries to servers. The agent must be running on every node that is part of the cluster.

For more detail on bootstrapping a datacenter, see this guide.
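
As a rough preview of non-dev usage, a server agent and a client agent might be started with flags like the ones below. The data directories, node names, and addresses here are placeholders; the bootstrapping guide covers these options in detail.

$ consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul \
    -node=agent-one -bind=192.168.1.10

$ consul agent -data-dir=/tmp/consul -node=agent-two \
    -bind=192.168.1.11 -retry-join=192.168.1.10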

» Starting the Agent

For simplicity, we'll start the Consul agent in development mode for now. This mode is useful for bringing up a single-node Consul environment quickly and easily. It is not intended to be used in production as it does not persist any state.

$ consul agent -dev
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.5.1'
           Node ID: '18339e2f-2b5e-cfa8-3ebe-a60cba24bb2a'
         Node name: 'Kaitlins-MBP'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2019/06/12 12:39:11 [DEBUG] agent: Using random ID "18339e2f-2b5e-cfa8-3ebe-a60cba24bb2a" as node ID
    2019/06/12 12:39:11 [DEBUG] tlsutil: Update with version 1
    2019/06/12 12:39:11 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
    2019/06/12 12:39:11 [DEBUG] tlsutil: IncomingRPCConfig with version 1
    2019/06/12 12:39:11 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
    2019/06/12 12:39:11 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:18339e2f-2b5e-cfa8-3ebe-a60cba24bb2a Address:127.0.0.1:8300}]
    2019/06/12 12:39:11 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2019/06/12 12:39:11 [INFO] serf: EventMemberJoin: Kaitlins-MBP.dc1 127.0.0.1
    2019/06/12 12:39:11 [INFO] serf: EventMemberJoin: Kaitlins-MBP 127.0.0.1
    2019/06/12 12:39:11 [INFO] consul: Handled member-join event for server "Kaitlins-MBP.dc1" in area "wan"
    2019/06/12 12:39:11 [INFO] consul: Adding LAN server Kaitlins-MBP (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2019/06/12 12:39:11 [DEBUG] agent/proxy: managed Connect proxy manager started
    2019/06/12 12:39:11 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2019/06/12 12:39:11 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2019/06/12 12:39:11 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2019/06/12 12:39:11 [INFO] agent: started state syncer
    2019/06/12 12:39:11 [INFO] agent: Started gRPC server on 127.0.0.1:8502 (tcp)
    2019/06/12 12:39:11 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2019/06/12 12:39:11 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
    2019/06/12 12:39:11 [DEBUG] raft: Votes needed: 1
    2019/06/12 12:39:11 [DEBUG] raft: Vote granted from 18339e2f-2b5e-cfa8-3ebe-a60cba24bb2a in term 2. Tally: 1
    2019/06/12 12:39:11 [INFO] raft: Election won. Tally: 1
    2019/06/12 12:39:11 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
    2019/06/12 12:39:11 [INFO] consul: cluster leadership acquired
    2019/06/12 12:39:11 [INFO] consul: New leader elected: Kaitlins-MBP

The Consul agent has started and is streaming log data. From the logs, you can see that the agent is running in server mode and has claimed leadership of the cluster. Additionally, the local member has been marked as a healthy member of the cluster.

» Cluster Members

If you run consul members in another terminal, you can see the members of the Consul cluster. We'll cover joining clusters in the next section, but for now, you should only see one member (yourself):

$ consul members
Node          Address         Status  Type    Build  Protocol  DC   Segment
Kaitlins-MBP  127.0.0.1:8301  alive   server  1.5.1  2         dc1  <all>

The output shows our own node, the address it is running on, its health state, its role in the cluster, and some version information. Additional metadata can be viewed by providing the -detailed flag.
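
For example, to include that metadata in the listing, run the same command with the flag:

$ consul members -detailed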

The output of the members command is based on the gossip protocol and is eventually consistent. That is, at any point in time, the view of the world as seen by your local agent may not exactly match the state on the servers. For a strongly consistent view of the world, use the HTTP API as it forwards the request to the Consul servers:

$ curl localhost:8500/v1/catalog/nodes
[
    {
        "ID": "18339e2f-2b5e-cfa8-3ebe-a60cba24bb2a",
        "Node": "Kaitlins-MBP",
        "Address": "127.0.0.1",
        "Datacenter": "dc1",
        "TaggedAddresses": {
            "lan": "127.0.0.1",
            "wan": "127.0.0.1"
        },
        "Meta": {
            "consul-network-segment": ""
        },
        "CreateIndex": 9,
        "ModifyIndex": 10
    }
]
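
The JSON above is shown formatted for readability; if your output arrives as a single line, the HTTP API also accepts a pretty query parameter, for example:

$ curl "localhost:8500/v1/catalog/nodes?pretty"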

In addition to the HTTP API, the DNS interface can be used to query the node. Make sure to point your DNS lookups at the Consul agent's DNS server, which runs on port 8600 by default. The format of the DNS entries (such as "Kaitlins-MBP.node.consul") will be covered in more detail later.

$ dig @127.0.0.1 -p 8600 Kaitlins-MBP.node.consul

; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 Kaitlins-MBP.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64657
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; ANSWER SECTION:
Kaitlins-MBP.node.consul. 0 IN  A 127.0.0.1

;; ADDITIONAL SECTION:
Kaitlins-MBP.node.consul. 0 IN  TXT "consul-network-segment="
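
If you only need the address itself, dig's +short option trims the output to just the answer, which in this case should be 127.0.0.1:

$ dig @127.0.0.1 -p 8600 +short Kaitlins-MBP.node.consul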

» Stopping the Agent

You can use Ctrl-C (the interrupt signal) to gracefully halt the agent. After interrupting the agent, you should see it leave the cluster and shut down.

By gracefully leaving, Consul notifies other cluster members that the node left. If you had forcibly killed the agent process, other members of the cluster would have detected that the node failed. When a member leaves, its services and checks are removed from the catalog. When a member fails, its health is simply marked as critical, but it is not removed from the catalog. Consul will automatically try to reconnect to failed nodes, allowing it to recover from certain network conditions, while left nodes are no longer contacted.
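
If the agent is not running in your foreground terminal, the same graceful leave can be triggered from another terminal with the leave command:

$ consul leave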

Additionally, if an agent is operating as a server, a graceful leave is important to avoid causing a potential availability outage affecting the consensus protocol. See the Adding and Removing Servers guide for details on how to safely add and remove servers.