
Use Integrated Storage for HA Coordination

»Challenge

In the most common scenario, you configure the Vault server to use a storage backend that supports high availability (HA); the storage backend then stores the Vault data while also maintaining the HA coordination. However, not all storage backends support HA (e.g. Amazon S3, Cassandra, MSSQL). In some cases, you may need to use a storage backend that does not have HA support, which means that you can only run a single-node Vault deployment instead of an HA cluster.

»Solution

When you need to use a storage backend that does not support HA, the ha_storage stanza can be specified alongside the storage stanza in the Vault server configuration to handle the HA coordination. By doing so, you can add Vault nodes for fault tolerance.

The backend specified in ha_storage must itself support HA.
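
For example, a minimal server configuration might pair a non-HA file storage backend with raft HA storage. This is a sketch; the paths and node_id here are illustrative placeholders, and the tutorial's scripts generate complete configurations later:

ha_storage "raft" {
  path    = "/opt/vault/ha-raft"
  node_id = "node_1"
}

storage "file" {
  path = "/opt/vault/data"
}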

»Terminology

  • HA coordination: This is used for intra-cluster high availability (in case one node fails). Without it, operators cannot have multiple Vault nodes join a cluster, and there is no failover in place. HA coordination manages the lock that nodes use to agree on which node is the leader.

  • Storage Backend: This is the storage to which Vault writes its data. Some storage backends, such as Integrated Storage and Consul, support HA coordination by default. Others (e.g. Amazon S3, Cassandra, MSSQL) do not. A server's status output reports whether HA is enabled, as shown in the example after this list.

  • Integrated Storage: This is a Vault-internal storage option that eliminates the need to manage a separate storage backend. It supports HA coordination by default.
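
You can check whether a running server has HA enabled from its status output; for example (assuming VAULT_ADDR points at the server you want to inspect):

$ vault status | grep "HA Enabled"
HA Enabled    true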

»Prerequisites

To perform the steps in this tutorial, you need Vault 1.5 or later.
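
You can confirm your installed version before proceeding; the output should report v1.5.0 or later.

$ vault version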

»Create a new HA cluster

The /new_cluster/cluster.sh script configures and starts three Vault servers.

For demonstration, vault_1 listens on port 8210, vault_2 listens on port 8220, and vault_3 listens on port 8230.
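
Throughout this tutorial, you can target a specific node by pointing VAULT_ADDR at its listener; for example, to address vault_2:

$ export VAULT_ADDR=http://127.0.0.1:8220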

  1. Pull the demo scripts by cloning or downloading the hashicorp/vault-guides repository from GitHub.

    Clone the repository:

    $ git clone https://github.com/hashicorp/vault-guides.git
    

    Or, download the repository contents as an archive from GitHub.

    This repository contains supporting content for all of the Vault learn guides. The content specific to this guide can be found within a sub-directory.

  2. Change the working directory to vault-guides/operations/raft-ha-storage/new_cluster.

    $ cd vault-guides/operations/raft-ha-storage/new_cluster
    
  3. Make the cluster.sh file executable.

    $ chmod +x cluster.sh
    
  4. Create the configuration for each Vault.

    $ ./cluster.sh create config
    
    [vault_1] Creating configuration
      - creating $demo_home/config-vault_1.hcl
      - creating $demo_home/raft-vault_1
    [vault_2] Creating configuration
      - creating $demo_home/config-vault_2.hcl
      - creating $demo_home/raft-vault_2
    [vault_3] Creating configuration
      - creating $demo_home/config-vault_3.hcl
      - creating $demo_home/raft-vault_3
    
  5. Review the server configuration file for vault_1.

    $ cat config-vault_1.hcl
    

    The storage stanza is configured to use the filesystem as the storage backend, which does not support HA. All encrypted Vault data gets stored in the filesystem ($demo_home/vault-storage-file/).

    To support HA, the ha_storage stanza is configured. As of Vault 1.5, Integrated Storage can be used as HA storage. Therefore, HA coordination information (e.g. which Vault server is the active node) is stored in the ha_storage, separate from the Vault data.

    ha_storage "raft" {
      path    = "$demo_home/ha-raft_1/"
      node_id = "vault_1"
    }
    
    storage "file" {
      path = "$demo_home/vault-storage-file/"
    }
    
    listener "tcp" {
      address = "127.0.0.1:8210"
      cluster_address = "127.0.0.1:8211"
      tls_disable = true
    }
    
    disable_mlock = true
    api_addr = "http://127.0.0.1:8210"
    cluster_addr = "http://127.0.0.1:8211"
    

    Similarly, config-vault_2.hcl and config-vault_3.hcl look much the same, differing only in their node-specific values. All three point to the same file storage.
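
    Assuming the script follows the same pattern, a sketch of the node-specific values you should expect in config-vault_2.hcl (the exact contents come from the script):

    ha_storage "raft" {
      path    = "$demo_home/raft-vault_2/"
      node_id = "vault_2"
    }

    listener "tcp" {
      address = "127.0.0.1:8220"
      cluster_address = "127.0.0.1:8221"
      tls_disable = true
    }

    api_addr = "http://127.0.0.1:8220"
    cluster_addr = "http://127.0.0.1:8221"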

  6. Set up vault_1.

    $ ./cluster.sh setup vault_1
    

    The script first terminates any Vault server instances that are running locally, and then starts a Vault server, vault_1.

    [vault_1] starting Vault server @ vault_1
    ++ nc -w 1 localhost 8210
    ++ VAULT_API_ADDR=http://127.0.0.1:8210
    ++ vault server -log-level=trace -config $demo_home/config-vault_1.hcl
    

    Once the server is running, the script initializes and unseals the server.

    [vault_1] initializing and capturing the recovery key and root token
    ++ vault_1 operator init -format=json -key-shares=1 -key-threshold=1
    ++ export VAULT_ADDR=http://127.0.0.1:8210
    ++ VAULT_ADDR=http://127.0.0.1:8210
    ++ vault operator init -format=json -key-shares=1 -key-threshold=1
    
    ...snip...
    
    ++ vault operator unseal z8+w9bV70i4ZluxVZA53tn3eMBOC/zM+8XlAs9LX2yw=
    

    It logs in with the initial root token, enables the kv-v2 secrets engine at kv, and creates a test secret at kv/apikey.

    [vault_1] logging in and enabling the KV secrets engine
    ...
    ++ vault secrets enable -path=kv kv-v2
    Success! Enabled the kv-v2 secrets engine at: kv/
    ...
    
    [vault_1] storing secret 'kv/apikey' for testing
    ++ vault_1 kv put kv/apikey webapp=ABB39KKPTWOR832JGNLS02
    ...
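
    You can verify the test secret yourself once setup completes (this assumes the $demo_home/rootToken1 file written by the script contains the raw root token):

    $ VAULT_TOKEN=$(cat $demo_home/rootToken1) VAULT_ADDR=http://127.0.0.1:8210 vault kv get kv/apikey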
    
  7. Set up vault_2.

    $ ./cluster.sh setup vault_2
    

    Once vault_2 starts up and is unsealed, its status reports HA Enabled as true.

    [vault_2] Unseal vault_2
    ++ cat $demo_home/unsealKey1
    ++ vault_2 operator unseal iEXMlG2E71qHTMgR7IpvW54qr79qZuw0Q74V3IrMS+o=
    ++ export VAULT_ADDR=http://127.0.0.1:8220
    ++ VAULT_ADDR=http://127.0.0.1:8220
    ++ vault operator unseal iEXMlG2E71qHTMgR7IpvW54qr79qZuw0Q74V3IrMS+o=
    Key                    Value
    ---                    -----
    Seal Type              shamir
    Initialized            true
    Sealed                 false
    Total Shares           1
    Threshold              1
    Version                1.5.0
    Cluster Name           vault-cluster-9a91eff6
    Cluster ID             32ddfb0d-bc42-9ea8-fbed-01d3c25976ed
    HA Enabled             true
    HA Cluster             n/a
    HA Mode                standby
    Active Node Address    <none>
    

    Next, vault_2 joins the raft cluster by invoking the vault operator raft join command. Notice that no target leader API address is provided with the command when you are using raft as the ha_storage.

    [vault_2] Join the raft cluster
    ++ vault_2 operator raft join
    ++ export VAULT_ADDR=http://127.0.0.1:8220
    ++ VAULT_ADDR=http://127.0.0.1:8220
    ++ vault operator raft join
    Key       Value
    ---       -----
    Joined    true
    
    ...snip...
    
    [vault_2] Vault status
    Key                    Value
    ---                    -----
    Seal Type              shamir
    Initialized            true
    Sealed                 false
    Total Shares           1
    Threshold              1
    Version                1.5.0
    Cluster Name           vault-cluster-9a91eff6
    Cluster ID             32ddfb0d-bc42-9ea8-fbed-01d3c25976ed
    HA Enabled             true
    HA Cluster             https://127.0.0.1:8211
    HA Mode                standby
    Active Node Address    http://127.0.0.1:8210
    

    The server status shows that the Active Node Address is http://127.0.0.1:8210 (vault_1).
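
    For comparison, when raft is the primary storage backend rather than the ha_storage, you would normally pass the leader's API address to the join command:

    $ vault operator raft join http://127.0.0.1:8210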

  8. Finally, set up vault_3.

    $ ./cluster.sh setup vault_3
    

    It executes the same workflow as the setup vault_2 command.

    [vault_3] Join the raft cluster
    ++ vault_3 operator raft join
    ++ export VAULT_ADDR=http://127.0.0.1:8230
    ++ VAULT_ADDR=http://127.0.0.1:8230
    ++ vault operator raft join
    Key       Value
    ---       -----
    Joined    true
    
    ...snip...
    
    [vault_3] List the raft cluster members
    ++ vault_3 operator raft list-peers
    ++ export VAULT_ADDR=http://127.0.0.1:8230
    ++ VAULT_ADDR=http://127.0.0.1:8230
    ++ vault operator raft list-peers
    Node       Address           State       Voter
    ----       -------           -----       -----
    vault_1    127.0.0.1:8211    leader      true
    vault_2    127.0.0.1:8221    follower    true
    vault_3    127.0.0.1:8231    follower    true
    

»Verification

The Vault data is stored in the filesystem as configured in the storage stanza ($demo_home/vault-storage-file/). Therefore, you can find the encrypted data under the $demo_home/vault-storage-file/logical folder.
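
You can browse this layout with the tree utility, if it is installed on your system:

$ tree $demo_home/vault-storage-file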

├── logical
│   ├── c76dd57b-a877-a961-f04a-b51768e10711
│   │   └── 6df7744b-5f26-87e1-0472-cb1ad6d76852
│   │       ├── _salt
│   │       ├── _upgrading
│   │       ├── archive
│   │       │   └── _metadata
│   │       ├── metadata
│   │       │   └── _p0CdeBpoSMnLfh1JKimQZqNu8en3Bf58X4nDttnBnenxRAAf50fahf4CWy7N4ipgcrP
│   │       ├── policy
│   │       │   └── _metadata
│   │       └── versions
│   │           └── 737
│   │               └── _8ff6ea1e45535fa4ad0ec59ef1eeb24445569c035b78549d95318c77d14d8
│   └── d170231f-b8b4-cc16-33b2-06ecc78b9371
│       └── _casesensitivity
...

The raft-vault_1, raft-vault_2, and raft-vault_3 directories store the information necessary for HA coordination. Verify this by triggering a leadership change.

Currently, vault_1 is the leader. Observe what happens when vault_1 steps down.

$ VAULT_ADDR=http://127.0.0.1:8210 vault operator step-down

Success! Stepped down: http://127.0.0.1:8210

Check the cluster information.

$ VAULT_ADDR=http://127.0.0.1:8210 vault operator raft list-peers

Node       Address           State       Voter
----       -------           -----       -----
vault_1    127.0.0.1:8211    follower    true
vault_2    127.0.0.1:8221    leader      true
vault_3    127.0.0.1:8231    follower    true

Now, vault_2 is the leader.

When you are done exploring, you can clean up the environment using the cluster.sh script.
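
For example, the clean subcommand stops the servers and removes the generated files (the full output appears in the Clean up section at the end of this tutorial):

$ ./cluster.sh clean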

»Update an existing server

You have a Vault server which uses the filesystem as its storage backend. Since the filesystem storage backend does not support HA, you have a single-node deployment. Now, you wish to add two more nodes and turn it into a three-node cluster.

  1. Change your working directory to vault-guides/operations/raft-ha-storage/existing_server.

    $ cd ../existing_server
    
  2. Make the cluster.sh file executable.

    $ chmod +x cluster.sh
    
  3. Start a server with the filesystem storage backend.

    $ ./cluster.sh setup vault_1
    

    This command spins up a Vault server listening on port 8210 which uses the file storage backend, then initializes and unseals the server.

    [vault_1] Creating configuration
    
    ...snip...
    
    [vault_1] starting Vault server @ vault_1
    ++ nc -w 1 localhost 8210
    ++ VAULT_API_ADDR=http://127.0.0.1:8210
    
    ...snip...
    
    [vault_1] initializing and capturing the recovery key and root token
    ++ vault_1 operator init -format=json -key-shares=1 -key-threshold=1
    ++ export VAULT_ADDR=http://127.0.0.1:8210
    ++ VAULT_ADDR=http://127.0.0.1:8210
    ++ vault operator init -format=json -key-shares=1 -key-threshold=1
    

    It also enables the K/V v2 secrets engine at kv and writes some test data.

    [vault_1] logging in and enabling the KV secrets engine
    ...snip...
    
    [vault_1] storing secret 'kv/apikey' for testing
    ++ vault_1 kv put kv/apikey webapp=ABB39KKPTWOR832JGNLS02
    ++ export VAULT_ADDR=http://127.0.0.1:8210
    ++ VAULT_ADDR=http://127.0.0.1:8210
    ++ vault kv put kv/apikey webapp=ABB39KKPTWOR832JGNLS02
    
  4. Review the created server configuration file, config-vault_1.hcl.

    $ cat config-vault_1.hcl
    
    storage "file" {
        path = "$demo_home/vault-storage-file/"
    }
    
    listener "tcp" {
      address = "127.0.0.1:8210"
      cluster_address = "127.0.0.1:8211"
      tls_disable = true
    }
    
    disable_mlock = true
    api_addr = "http://127.0.0.1:8210"
    cluster_addr = "http://127.0.0.1:8211"
    

    The storage stanza configures filesystem as the storage backend.

  5. The server status shows that HA Enabled is false.

    $ VAULT_ADDR=http://127.0.0.1:8210 vault status
    
    Key             Value
    ---             -----
    Seal Type       shamir
    Initialized     true
    Sealed          false
    Total Shares    1
    Threshold       1
    Version         1.5.0
    Cluster Name    vault-cluster-f467afe0
    Cluster ID      8380262f-64a4-6eb5-716a-4a1fef8ce153
    HA Enabled      false
    
  6. Stop vault_1 before updating its configuration.

    $ ./cluster.sh stop vault_1
    
    Found 1 Vault service(s) matching that name
    [vault_1] stopping
    
  7. Update vault_1 to define its ha_storage.

    $ ./cluster.sh update
    

    This command updates config-vault_1.hcl with the ha_storage stanza.

    $ cat config-vault_1.hcl
    
    ha_storage "raft" {
      path    = "$demo_home/raft-vault_1/"
      node_id = "vault_1"
    }
    
    storage "file" {
        path = "$demo_home/vault-storage-file/"
    }
    
    ##...snip...
    

    The script restarts vault_1 with the updated configuration, unseals it, and then bootstraps the raft cluster.

    [vault_1] Bootstrap: vault write -f sys/storage/raft/bootstrap
    ++ vault_1 write -f sys/storage/raft/bootstrap
    ++ export VAULT_ADDR=http://127.0.0.1:8210
    ++ VAULT_ADDR=http://127.0.0.1:8210
    ++ vault write -f sys/storage/raft/bootstrap
    Success! Data written to: sys/storage/raft/bootstrap
    

    After bootstrapping, the server status output shows HA Enabled as true.

    Key             Value
    ---             -----
    Seal Type       shamir
    Initialized     true
    Sealed          false
    Total Shares    1
    Threshold       1
    Version         1.5.0
    Cluster Name    vault-cluster-f467afe0
    Cluster ID      8380262f-64a4-6eb5-716a-4a1fef8ce153
    HA Enabled      true
    HA Cluster      https://127.0.0.1:8211
    HA Mode         active
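
    At this point the raft HA cluster has a single member. You can confirm with list-peers; expect output similar to the following:

    $ VAULT_ADDR=http://127.0.0.1:8210 vault operator raft list-peers

    Node       Address           State     Voter
    ----       -------           -----     -----
    vault_1    127.0.0.1:8211    leader    true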
    

  8. You can now add an additional node, vault_2.

    $ ./cluster.sh setup vault_2
    

    This command creates a server configuration file, config-vault_2.hcl, starts the server, which listens on port 8220, and unseals it.

    [vault_2] Creating configuration
    ...snip...
    
    [vault_2] starting Vault server @ vault_2
    ++ nc -w 1 localhost 8220
    ++ VAULT_API_ADDR=http://127.0.0.1:8220
    ++ vault server -log-level=trace -config
    ...snip...
    
    [vault_2] Unseal vault_2
    ++ cat $demo_home/unsealKey1
    ++ vault_2 operator unseal X9ig4qSZzWrjxLuRXGsVXaQQK3uL9NI7YlHHkEsdZUI=
    ++ export VAULT_ADDR=http://127.0.0.1:8220
    ++ VAULT_ADDR=http://127.0.0.1:8220
    ++ vault operator unseal X9ig4qSZzWrjxLuRXGsVXaQQK3uL9NI7YlHHkEsdZUI=
    ...snip...
    

    Remember that the server is using the file storage backend to persist Vault data; because vault_2 shares the same file storage ($demo_home/vault-storage-file/), it was effectively initialized when you initialized vault_1, so the same unseal key and initial root token apply.

    Once the server is up and running, it joins the raft cluster by invoking the vault operator raft join command. Notice again that no target leader API address is provided with the command when you are using raft as the ha_storage.

    [vault_2] Join the raft cluster
    ++ vault_2 operator raft join
    ++ export VAULT_ADDR=http://127.0.0.1:8220
    ++ VAULT_ADDR=http://127.0.0.1:8220
    ++ vault operator raft join
    Key       Value
    ---       -----
    Joined    true
    
    ...snip...
    
    [vault_2] List the raft cluster members
    ++ vault_2 operator raft list-peers
    ++ export VAULT_ADDR=http://127.0.0.1:8220
    ++ VAULT_ADDR=http://127.0.0.1:8220
    ++ vault operator raft list-peers
    Node       Address           State       Voter
    ----       -------           -----       -----
    vault_1    127.0.0.1:8211    leader      true
    vault_2    127.0.0.1:8221    follower    true
    
  9. You can examine the generated config-vault_2.hcl file.

    $ cat config-vault_2.hcl
    
    ha_storage "raft" {
      path    = "$demo_home/raft-vault_2/"
      node_id = "vault_2"
    }
    
    storage "file" {
      path = "$demo_home/vault-raft-file/"
    }
    
    listener "tcp" {
      address = "127.0.0.1:8220"
      cluster_address = "127.0.0.1:8221"
      tls_disable = true
    }
    
    disable_mlock = true
    api_addr = "http://127.0.0.1:8220"
    cluster_addr = "http://127.0.0.1:8221"
    
  10. Add vault_3 to the raft cluster.

    $ ./cluster.sh setup vault_3
    

    This command executes the same workflow as setup vault_2.

    [vault_3] Join the raft cluster
    ++ vault_3 operator raft join
    ++ export VAULT_ADDR=http://127.0.0.1:8230
    ++ VAULT_ADDR=http://127.0.0.1:8230
    ++ vault operator raft join
    Key       Value
    ---       -----
    Joined    true
    
    ...snip...
    
    [vault_3] List the raft cluster members
    ++ vault_3 operator raft list-peers
    ++ export VAULT_ADDR=http://127.0.0.1:8230
    ++ VAULT_ADDR=http://127.0.0.1:8230
    ++ vault operator raft list-peers
    Node       Address           State       Voter
    ----       -------           -----       -----
    vault_1    127.0.0.1:8211    leader      true
    vault_2    127.0.0.1:8221    follower    true
    vault_3    127.0.0.1:8231    follower    true
    

    You now have a three-node cluster.

»Verification

Observe what happens when vault_1 steps down.

$ VAULT_ADDR=http://127.0.0.1:8210 vault operator step-down

Success! Stepped down: http://127.0.0.1:8210

Check the cluster information.

$ VAULT_ADDR=http://127.0.0.1:8210 vault operator raft list-peers

Node       Address           State       Voter
----       -------           -----       -----
vault_1    127.0.0.1:8211    follower    true
vault_2    127.0.0.1:8221    leader      true
vault_3    127.0.0.1:8231    follower    true

Now, vault_2 is the leader.

In vault_2.log, you should find entries like the following.
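
For example, search the log written by the script (assuming logs live in $demo_home, as shown in the Clean up section):

$ grep 'ha.raft' $demo_home/vault_2.log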

...
[DEBUG] ha.raft: vote granted: from=vault_1 term=3 tally=2
[INFO]  ha.raft: election won: tally=2
[INFO]  ha.raft: entering leader state: leader="Node at 127.0.0.1:8221 [Leader]"
[INFO]  ha.raft: added peer, starting replication: peer=vault_1
[INFO]  ha.raft: added peer, starting replication: peer=vault_3
[INFO]  ha.raft: pipelining replication: peer="{Voter vault_1 127.0.0.1:8211}"
...

»Clean up

When you are done, you can quickly stop all services, remove all configuration, and remove all modifications to your local system with the same cluster.sh script you used for setup.

Clean up your local workstation.

$ ./cluster.sh clean
Found 1 Vault service(s) matching that name
[vault_1] stopping

Found 1 Vault service(s) matching that name
[vault_2] stopping

Found 1 Vault service(s) matching that name
[vault_3] stopping

Removing configuration file $demo_home/config-vault_1.hcl

Removing configuration file $demo_home/config-vault_2.hcl

Removing configuration file $demo_home/config-vault_3.hcl

Removing raft storage file $demo_home/raft-vault_1
Removing raft storage file $demo_home/raft-vault_2
Removing raft storage file $demo_home/raft-vault_3
Removing raft storage file $demo_home/vault-storage-file
Removing key $demo_home/rootToken1
Removing key $demo_home/unsealKey1
Removing log file $demo_home/vault_1.log
Removing log file $demo_home/vault_2.log
Removing log file $demo_home/vault_3.log
Clean complete

This tutorial demonstrated the ha_storage configuration, which was first introduced in Vault 0.9.0. When the storage backend supports HA, a single storage backend can both persist Vault data and handle HA coordination.

When Vault 1.4 introduced Integrated Storage, it could not be used as ha_storage even though it supports HA. As of Vault 1.5, Integrated Storage can serve as ha_storage. This tutorial demonstrated the basic concept of how this works.

»Help and Reference