Vault 1.2 first introduced an internal storage backend, Integrated Storage, as a technical preview alongside the supported external storage types. (Integrated Storage became generally available in Vault 1.4.) With Integrated Storage, data is replicated to all the nodes in the cluster using the Raft consensus protocol; however, managing the nodes in the cluster was a manual process.
Vault 1.7 introduced autopilot to simplify and automate cluster management for Integrated Storage. Autopilot includes:
- Cluster node health checks
- Server stabilization: monitor the health of a newly added node for a period and decide its promotion to voter status, preventing an unstable new node from disrupting the Raft quorum
- Dead server cleanup: periodic, automatic cleanup of failed servers
Autopilot is enabled by default upon upgrading to Vault 1.7. Server stabilization works out of the box, but you need to enable dead server cleanup explicitly, as you will learn in the Autopilot configuration section.
»Prerequisites
This tutorial requires Vault, sudo access, and additional configuration to create the cluster.
»Scenario setup
To demonstrate the autopilot feature, you will start six Vault instances, each listening on a different port as described below.
- vault_1 (http://127.0.0.1:8100) is initialized and unsealed. Its root token creates a transit key that enables auto-unseal for the other Vault servers. This Vault server is not a part of the cluster.
- vault_2 (http://127.0.0.1:8200) is initialized and unsealed. This Vault starts as the cluster leader. An example K/V-V2 secret is created.
- vault_3 (http://127.0.0.1:8300) is started and automatically joins the cluster via retry_join (see the configuration sketch after this list).
- vault_4 (http://127.0.0.1:8400) is started and automatically joins the cluster via retry_join.
- vault_5 (http://127.0.0.1:8500) is started and automatically joins the cluster via retry_join.
- vault_6 (http://127.0.0.1:8600) is started and automatically joins the cluster via retry_join.
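For reference, a retry_join stanza tells a node which leader API address to attempt to join. The following is a minimal sketch of what such a stanza might look like for vault_3; the path and node_id values shown are assumptions, since the tutorial scripts generate the actual configuration files for you.

storage "raft" {
  path    = "raft-vault_3"    # assumed local raft data directory
  node_id = "vault_3"

  # Keep retrying to join the cluster through the leader's API address
  retry_join {
    leader_api_addr = "http://127.0.0.1:8200"
  }
}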
If this is your first time setting up a Vault cluster with integrated storage, go through the Vault HA Cluster with Integrated Storage tutorial.
Retrieve the configuration by cloning or downloading the hashicorp/vault-guides repository from GitHub.

Clone the repository.

$ git clone https://github.com/hashicorp/vault-guides.git

Or download the repository.

This repository contains supporting content for all of the Vault learn tutorials. The content specific to this tutorial can be found within a sub-directory.

Change the working directory to vault-guides/operations/raft-autopilot/local.

$ cd vault-guides/operations/raft-autopilot/local
Set the run_all.sh file to executable.

$ chmod +x run_all.sh
Execute the run_all.sh script to spin up a Vault cluster with 5 nodes.

$ ./run_all.sh
[vault_1] Creating configuration
  - creating /git/vault-guides/operations/raft-autopilot/local/config-vault_1.hcl
[vault_2] Creating configuration
  - creating /git/vault-guides/operations/raft-autopilot/local/config-vault_2.hcl
  - creating /git/vault-guides/operations/raft-autopilot/local/raft-vault_2
...snip...
[vault_5] starting Vault server @ http://127.0.0.1:8500
Using [vault_1] root token (s.5rjDMzU5Kj9bImUVaqPpihAo) to retrieve transit key for auto-unseal
[vault_6] starting Vault server @ http://127.0.0.1:8600
Using [vault_1] root token (s.5rjDMzU5Kj9bImUVaqPpihAo) to retrieve transit key for auto-unseal
You can find the server configuration files and the log files in the working directory.
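The commands that follow run against the cluster leader. If your shell is not already configured for it, point the Vault CLI at vault_2 before continuing (shown here as an assumption; the setup script may already handle this for you).

$ export VAULT_ADDR=http://127.0.0.1:8200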
Verify the cluster.
$ vault operator raft list-peers

Node       Address           State       Voter
----       -------           -----       -----
vault_2    127.0.0.1:8201    leader      true
vault_3    127.0.0.1:8301    follower    true
vault_4    127.0.0.1:8401    follower    true
vault_5    127.0.0.1:8501    follower    true
vault_6    127.0.0.1:8601    follower    true
The vault_2 node is the leader.
»Understand the autopilot behavior
View the help message for the vault operator raft autopilot command.

$ vault operator raft autopilot -help
This command is accessed by using one of the subcommands below.

Subcommands:
    get-config    Returns the configuration of the autopilot subsystem under integrated storage
    set-config    Modify the configuration of the autopilot subsystem under integrated storage
    state         Displays the state of the raft cluster under integrated storage as seen by autopilot
Display the current cluster status.
$ vault operator raft autopilot state
Healthy: true
Failure Tolerance: 2
Leader: vault_2
Voters:
   vault_2
   vault_3
   vault_4
   vault_5
   vault_6
Servers:
   vault_2
      Name:            vault_2
      Address:         127.0.0.1:8201
      Status:          leader
      Node Status:     alive
      Healthy:         true
      Last Contact:    0s
      Last Term:       3
      Last Index:      118
   vault_3
      Name:            vault_3
      Address:         127.0.0.1:8301
      Status:          voter
      Node Status:     alive
      Healthy:         true
      Last Contact:    1.73895338s
      Last Term:       3
      Last Index:      118
   vault_4
      Name:            vault_4
      Address:         127.0.0.1:8401
      Status:          voter
      Node Status:     alive
      Healthy:         true
      Last Contact:    4.68575147s
      Last Term:       3
      Last Index:      118
   vault_5
      Name:            vault_5
      Address:         127.0.0.1:8501
      Status:          voter
      Node Status:     alive
      Healthy:         true
      Last Contact:    2.630693989s
      Last Term:       3
      Last Index:      118
   vault_6
      Name:            vault_6
      Address:         127.0.0.1:8601
      Status:          voter
      Node Status:     alive
      Healthy:         true
      Last Contact:    579.174724ms
      Last Term:       3
      Last Index:      118
This displays the overall health of the cluster and its failure tolerance. The current leader node is vault_2. The Failure Tolerance is 2; therefore, you can lose up to 2 nodes and still maintain quorum. The Healthy parameter value is true for all nodes in the cluster.
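The same information is also available through the HTTP API via the sys/storage/raft/autopilot/state endpoint. A quick sketch, assuming VAULT_TOKEN holds a token with sufficient permissions:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/sys/storage/raft/autopilot/state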
Refer to the deployment table for the quorum size and failure tolerance for various cluster sizes.
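As a quick rule of thumb, quorum is floor(n/2) + 1: a five-node cluster needs three voters for quorum and therefore tolerates two failures, while a three-node cluster needs two voters and tolerates only one.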
»Stop one of the nodes
Set the cluster.sh file to executable.

$ chmod +x cluster.sh
Stop vault_6.

$ ./cluster.sh stop vault_6
Found 1 Vault service(s) matching that name
[vault_6] stopping
Optional: You can verify that vault_6 is not running.

$ ps | grep vault
41873 ttys009    0:34.57 vault server -log-level=trace -config <path>/config-vault_1.hcl
41919 ttys009   11:07.38 vault server -log-level=trace -config <path>/config-vault_2.hcl
41966 ttys009    1:50.94 vault server -log-level=trace -config <path>/config-vault_3.hcl
41982 ttys009    1:52.26 vault server -log-level=trace -config <path>/config-vault_4.hcl
41998 ttys009    1:50.86 vault server -log-level=trace -config <path>/config-vault_5.hcl
45834 ttys009    0:00.01 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox vault
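Alternatively, you can query the stopped node directly and confirm that the request fails (a convenience check, assuming vault_6's address from the scenario setup).

$ VAULT_ADDR=http://127.0.0.1:8600 vault status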
Check the cluster health.
$ vault operator raft autopilot state
Notice that the Healthy state of the cluster is false, and the Failure Tolerance is now 1.

Healthy: false
Failure Tolerance: 1
Leader: vault_2
Voters:
   vault_2
   vault_3
   vault_4
   vault_5
   vault_6
...snip...

The Healthy state of vault_6 is false; therefore, you know which node failed.

...snip...
   vault_6
      Name:            vault_6
      Address:         127.0.0.1:8601
      Status:          voter
      Node Status:     alive
      Healthy:         false
      Last Contact:    55.577082309s
      Last Term:       3
      Last Index:      154
Although vault_6 is no longer running, it is still a cluster member at this point.
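Without autopilot's dead server cleanup, removing a failed node would be a manual step. For reference, the manual removal would look like the following, using the node ID from this scenario (do not run it here; the next section lets autopilot handle it instead).

$ vault operator raft remove-peer vault_6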
»Autopilot configuration
Check the autopilot settings to see the default behavior.
| Parameter | Description |
|---|---|
| Cleanup Dead Servers (bool) | Specifies automatic removal of dead server nodes periodically. |
| Last Contact Threshold (string) | Limit the amount of time a server can go without leader contact before being considered unhealthy. |
| Dead Server Last Contact Threshold (string) | Limit the amount of time a server can go without leader contact before being considered failed. |
| Server Stabilization Time (string) | Minimum amount of time a server must be stable in the 'healthy' state before being added to the cluster. |
| Min Quorum (int) | Minimum number of servers allowed in a cluster before autopilot can prune dead servers. |
| Max Trailing Logs (int) | Maximum number of log entries in the Raft log that a server can be behind its leader before being considered unhealthy. |
Check the current autopilot configuration.
$ vault operator raft autopilot get-config

Key                                   Value
---                                   -----
Cleanup Dead Servers                  false
Last Contact Threshold                10s
Dead Server Last Contact Threshold    24h0m0s
Server Stabilization Time             10s
Min Quorum                            0
Max Trailing Logs                     1000
The Cleanup Dead Servers parameter is set to false.

Update the autopilot configuration to enable dead server cleanup. For demonstration, set the Dead Server Last Contact Threshold to 10 seconds and the Server Stabilization Time to 30 seconds.
$ vault operator raft autopilot set-config \
    -dead-server-last-contact-threshold=10 \
    -server-stabilization-time=30 \
    -cleanup-dead-servers=true \
    -min-quorum=3
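The HTTP API offers the same operation through the sys/storage/raft/autopilot/configuration endpoint. A hedged equivalent, assuming VAULT_TOKEN is set; the payload keys mirror the CLI flags:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"cleanup_dead_servers": true, "dead_server_last_contact_threshold": "10s", "server_stabilization_time": "30s", "min_quorum": 3}' \
    $VAULT_ADDR/v1/sys/storage/raft/autopilot/configuration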
Verify the configuration.
$ vault operator raft autopilot get-config

Key                                   Value
---                                   -----
Cleanup Dead Servers                  true
Last Contact Threshold                10s
Dead Server Last Contact Threshold    10s
Server Stabilization Time             30s
Min Quorum                            3
Max Trailing Logs                     1000
Check the cluster health.
$ vault operator raft autopilot state
Healthy: true
Failure Tolerance: 1
Leader: vault_2
Voters:
   vault_2
   vault_3
   vault_4
   vault_5
Servers:
...snip...
The cluster's Healthy parameter value is back to true. Notice that vault_6 is no longer listed. The Voters parameter lists vault_2 through vault_5.

Check the cluster peers to double-check.
$ vault operator raft list-peers

Node       Address           State       Voter
----       -------           -----       -----
vault_2    127.0.0.1:8201    leader      true
vault_3    127.0.0.1:8301    follower    true
vault_4    127.0.0.1:8401    follower    true
vault_5    127.0.0.1:8501    follower    true
»Add a new node to the cluster
Explore how the autopilot configuration settings influence the cluster when you add a new node.
Add a new node (vault_7) to the cluster.

$ ./cluster.sh setup vault_7
[vault_7] starting Vault server @ http://127.0.0.1:8700
Using [vault_1] root token (s.wsEIMfqTipb0mZT051TNbcYJ) to retrieve transit key for auto-unseal
List the cluster members.
$ vault operator raft list-peers

Node       Address           State       Voter
----       -------           -----       -----
vault_2    127.0.0.1:8201    leader      true
vault_3    127.0.0.1:8301    follower    true
vault_4    127.0.0.1:8401    follower    true
vault_5    127.0.0.1:8501    follower    true
vault_7    127.0.0.1:8701    follower    false
Notice that the vault_7 server is a non-voter. (The Voter parameter value is false.)

Check the cluster health.
$ vault operator raft autopilot state
Healthy: true
Failure Tolerance: 1
Leader: vault_2
Voters:
   vault_2
   vault_3
   vault_4
   vault_5
Servers:
...snip...
   vault_7
      Name:            vault_7
      Address:         127.0.0.1:8701
      Status:          non-voter
      Node Status:     alive
      Healthy:         true
      Last Contact:    2.580581282s
      Last Term:       3
      Last Index:      78
The vault_7 server joins the cluster as a non-voter until the Server Stabilization Time of 30 seconds elapses.
Wait for 30 seconds and check the cluster peers.
$ vault operator raft list-peers

Node       Address           State       Voter
----       -------           -----       -----
vault_2    127.0.0.1:8201    leader      true
vault_3    127.0.0.1:8301    follower    true
vault_4    127.0.0.1:8401    follower    true
vault_5    127.0.0.1:8501    follower    true
vault_7    127.0.0.1:8701    follower    true
Now, the vault_7 server should be a voter. This is a part of the server stabilization mechanism of autopilot.
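If you want to watch the promotion as it happens, a small convenience loop (not part of the tutorial scripts) polls the peer list a few times:

$ for i in 1 2 3 4; do vault operator raft list-peers; sleep 10; done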
Vault Enterprise: Explicit non-voter nodes behave the same way as before and remain non-voters as designed. If dead server cleanup is enabled, it also prunes failed non-voters.
»Configure the state check interval
By default, autopilot picks up any state change at an interval of 10 seconds. To change the default, set the autopilot_reconcile_interval parameter inside the storage stanza in the server configuration file.
Example: The following server configuration file sets autopilot to pick up state changes at an interval of 15 seconds.
storage "raft" {
path = "/path/to/raft/data"
node_id = "raft_node_1"
# overwrite the default interval
autopilot_reconcile_interval = "15s"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = true
}
cluster_addr = "http://127.0.0.1:8201"
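Note that autopilot_reconcile_interval lives in the server configuration file, so each node must be restarted for a changed value to take effect.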
»Clean up
The cluster.sh script provides a clean operation that removes all services, configuration, and modifications to your local system.
Clean up your local workstation.
$ ./cluster.sh clean
Found 1 Vault service(s) matching that name
[vault_1] stopping
...snip...
Removing log file /git/vault-guides/operations/raft-autopilot/local/vault_5.log
Removing log file /git/vault-guides/operations/raft-autopilot/local/vault_6.log
Clean complete