
Vault HA Cluster with Integrated Storage on AWS

»Challenge

Vault supports many storage providers to persist its encrypted data (for example, Consul, MySQL, or DynamoDB). These providers require:

  • Their own administration, which increases complexity and total administrative overhead.
  • Provider configuration to allow Vault as a client.
  • Vault configuration to connect to the provider as a client.

»Solution

Use Vault's Integrated Storage to persist the encrypted data. The integrated storage has the following benefits:

  • Integrated into Vault (reducing total administration)
  • All configuration within Vault
  • Supports failover and multi-cluster replication
  • Eliminates additional network requests
  • Performance gains (reduces disk write/read overhead)
  • Lowers complexity when diagnosing issues (leading to faster time to recovery)

Reference Architecture

»Prerequisites

This guide requires an AWS account, Terraform, and additional configuration to create the cluster.

  • First, create an AWS account with AWS credentials and an EC2 key pair

  • Next, install Terraform

  • Next, retrieve the configuration by cloning or downloading the hashicorp/vault-guides repository from GitHub.

    Clone the repository.

    $ git clone https://github.com/hashicorp/vault-guides.git
    

    Or download the repository.


    This repository contains supporting content for all of the Vault learn guides. The content specific to this guide can be found within a sub-directory.

  • Finally, go into the vault-guides/operations/raft-storage/aws directory.

    $ cd vault-guides/operations/raft-storage/aws
    

»Setup

The Terraform configuration provisions four EC2 instances, each running Vault. The scenario diagram below summarizes the role of each node:

Scenario

  • vault_1 is initialized and unsealed. Its root token is used to create a transit key that enables the other Vault nodes to auto-unseal (a sketch of this setup follows this list). This node does not join the cluster.
  • vault_2 is initialized and unsealed. This Vault starts as the cluster leader. An example K/V-V2 secret is created.
  • vault_3 is only started. You will join it to the cluster.
  • vault_4 is only started. You will join it to the cluster.
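
To enable auto-unseal on the other nodes, the provisioning scripts use the vault_1 root token to enable the transit secrets engine and create the unseal_key referenced by the seal "transit" stanza you will examine later in this guide. You do not need to run this yourself; a minimal sketch of that setup on vault_1, assuming the repository's scripts (which may differ in detail), looks like:

    $ vault secrets enable transit
    $ vault write -f transit/keys/unseal_key
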
  1. Set your AWS credentials as environment variables.

    $ export AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY_ID>"
    $ export AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_ACCESS_KEY>"
    
  2. Copy terraform.tfvars.example and rename it to terraform.tfvars.

    $ cp terraform.tfvars.example terraform.tfvars
    
  3. Edit terraform.tfvars to override the default settings as appropriate for your environment.

    # AWS EC2 Region
    # default: 'us-east-1'
    # @see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
    aws_region = "us-east-1"
    
    # AWS EC2 Availability Zone
    # default: 'us-east-1a'
    # @see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#using-regions-availability-zones-launching
    availability_zones = "us-east-1a"
    
    # AWS EC2 Key Pair
    # @see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
    key_name = "learn-vault-key"
    
    # Specify a name here to tag all instances
    # default: 'learn-vault-raft_storage'
    environment_name = "learn-vault"
    
  4. Initialize Terraform.

    $ terraform init
    
    Initializing modules...
    Downloading terraform-aws-modules/vpc/aws 2.21.0 for vault_demo_vpc...
      - vault_demo_vpc in .terraform/modules/vault_demo_vpc/terraform-aws-modules-terraform-aws-vpc-2417f60
    
    Initializing the backend...
    
    Initializing provider plugins...
      - Checking for available provider plugins...
      - Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
      - Downloading plugin for provider "aws" (hashicorp/aws) 2.40.0...
    ...
    
  5. Apply the Terraform plan with automatic approval.

    $ terraform apply -auto-approve
    ...
    Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
    

    The Terraform output displays the IP addresses of the provisioned Vault nodes. (If you need to print them again later, see the note after these steps.)

    Example:

    vault_1 (13.56.238.70) | internal (10.0.101.21)
      - Initialized and unsealed.
      - The root token creates a transit key that enables the other Vaults to auto-unseal.
      - Does not join the High-Availability (HA) cluster.
    
    vault_2 (13.57.14.206) | internal (10.0.101.22)
      - Initialized and unsealed.
      - The root token and recovery key are stored in /tmp/key.json.
      - K/V-V2 secrets engines enabled and secret stored.
      - Leader of HA cluster
    
      $ ssh -l ubuntu 13.57.14.206 -i <path/to/key.pem>
    
      # Root token
      $ ssh -l ubuntu 13.57.14.206 -i <path/to/key.pem> "cat ~/root_token"
      # Recovery key
      $ ssh -l ubuntu 13.57.14.206 -i <path/to/key.pem> "cat ~/recovery_key"
    
    vault_3 (54.183.135.252) | internal (10.0.101.23)
      - Started
      - You will join it to the cluster started by vault_2
    
      $ ssh -l ubuntu 54.183.135.252 -i <path/to/key.pem>
    
    vault_4 (13.57.238.164) | internal (10.0.101.24)
      - Started
      - You will join it to the cluster started by vault_2
    
      $ ssh -l ubuntu 13.57.238.164 -i <path/to/key.pem>
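
If you need these addresses again later, you can re-print them at any time from this directory; the apply output above indicates they are defined as Terraform outputs:

$ terraform output
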
    

»Create an HA cluster

Currently vault_2 is initialized, unsealed, and has HA enabled. It is the only node in a cluster. The remaining nodes, vault_3 and vault_4, have not joined its cluster.

»Examine the leader

Let's discover more about the configuration of vault_2 and how it describes the current state of the cluster.

  1. Open a new terminal and SSH into vault_2.

    $ ssh -l ubuntu 13.57.14.206 -i <path/to/key.pem>
    
  2. Examine the vault_2 server configuration file (/etc/vault.d/vault.hcl).

    $ sudo cat /etc/vault.d/vault.hcl
    
    storage "raft" {
      path    = "/vault/vault_2"
      node_id = "vault_2"
    }
    
    listener "tcp" {
      address     = "0.0.0.0:8200"
      cluster_address     = "0.0.0.0:8201"
      tls_disable = true
    }
    
    seal "transit" {
      address            = "http://10.0.101.21:8200"
      token              = "root"
      disable_renewal    = "false"
    
      // Key configuration
      key_name           = "unseal_key"
      mount_path         = "transit/"
    }
    
    api_addr = "http://13.57.14.206:8200"
    cluster_addr = "http://10.0.101.22:8201"
    disable_mlock = true
    ui=true
    

    To use Integrated Storage, the storage stanza is set to raft. The path parameter specifies where Vault data is stored on disk (/vault/vault_2), and node_id identifies this node within the Raft cluster.

  3. Configure the vault CLI to use the root token for requests.

    $ export VAULT_TOKEN=$(cat ~/root_token)
    
  4. View the Raft configuration information.

    $ vault operator raft list-peers
    
    Node       Address             State     Voter
    ----       -------             -----     -----
    vault_2    10.0.101.22:8201    leader    true
    

    The cluster reports that vault_2 is the only node and is currently the leader.
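
    As a quick cross-check, you can also run vault status on this node. With Integrated Storage and HA enabled, its output should report the storage type as raft and the HA mode as active (the exact field names can vary slightly between Vault versions).

    $ vault status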

»Join nodes to the cluster

Add vault_3 to the cluster using the vault operator raft join command.

  1. Open a new terminal and SSH into vault_3.

    $ ssh -l ubuntu 54.183.135.252 -i <path/to/key.pem>
    
  2. Join vault_3 to the vault_2 cluster.

    $ vault operator raft join http://vault_2:8200
    Key       Value
    ---       -----
    Joined    true
    

    When a node joins the cluster, it receives a challenge from the leader node and must unseal itself to answer that challenge. This node unseals itself through vault_1, via the transit auto-unseal method, and then answers the challenge correctly. (A sketch of the equivalent HTTP API call appears after these steps.)

  3. Configure the vault CLI to use the vault_2 root token for requests. The token is stored in the ~/root_token file on the vault_2 host, so retrieve it from there and set it in this terminal.

    $ export VAULT_TOKEN="s.lZMMSsFkuz4KAsJlFTNA3myK"
    
  4. Examine the current raft peer set.

    $ vault operator raft list-peers
    
    Node       Address             State       Voter
    ----       -------             -----       -----
    vault_2    10.0.101.22:8201    leader      true
    vault_3    10.0.101.23:8201    follower    true
    

    Now, vault_3 is listed as a follower node.

  5. Verify that you can read the secret at kv/apikey.

    $ vault kv get kv/apikey
    
    ====== Metadata ======
    Key              Value
    ---              -----
    created_time     2020-03-25T07:23:09.944332061Z
    deletion_time    n/a
    destroyed        false
    version          1
    
    ===== Data =====
    Key       Value
    ---       -----
    webapp    ABB39KKPTWOR832JGNLS02
    

    This node has access to the secrets defined within the cluster of which it is a member.
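
The vault operator raft join command wraps Vault's HTTP API. A minimal sketch of the equivalent API call, assuming the joining node's API listens on http://127.0.0.1:8200 (the join endpoint is unauthenticated because the joining node is not yet unsealed):

$ curl --request POST \
    --data '{"leader_api_addr": "http://vault_2:8200"}' \
    http://127.0.0.1:8200/v1/sys/storage/raft/join
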

»Retry join

Similarly, you can use the vault operator raft join command to join vault_4 to the cluster. However, if the connection details of all the nodes are known beforehand, you can configure the retry_join stanza in the server configuration file to automatically join the cluster.

  1. Open a new terminal and SSH into vault_4.

    $ ssh -l ubuntu 13.57.238.164 -i <path/to/key.pem>
    
  2. Stop vault_4.

    $ sudo systemctl stop vault
    
  3. Open the server configuration file, /etc/vault.d/vault.hcl in a text editor of your choice.

    Example:

    $ sudo vi /etc/vault.d/vault.hcl
    
  4. Add the retry_join blocks inside the storage stanza as follows. Each leader_api_addr must point to a potential cluster leader; in this environment, that is vault_2 and vault_3.

    storage "raft" {
      path    = "/vault/vault_4"
      node_id = "vault_4"
      retry_join {
        leader_api_addr = "http://vault_2:8200"
      }
      retry_join {
        leader_api_addr = "http://vault_3:8200"
      }
    }
    
    ## ...snipped...
    

    Since the addresses of vault_2 and vault_3 are known, you can predefine the possible cluster leader addresses in the retry_join blocks.

  5. Start vault_4.

    $ sudo systemctl start vault
    
  6. Configure the vault CLI to use the vault_2 root token for requests. The token is stored in the ~/root_token file on the vault_2 host, so retrieve it from there and set it in this terminal.

    $ export VAULT_TOKEN="s.lZMMSsFkuz4KAsJlFTNA3myK"
    
  7. List the peers and notice that vault_4 is listed as a follower node.

    $ vault operator raft list-peers
    
    Node       Address             State       Voter
    ----       -------             -----       -----
    vault_2    10.0.101.22:8201    leader      true
    vault_3    10.0.101.23:8201    follower    true
    vault_4    10.0.101.24:8201    follower    true
    
  8. If VAULT_TOKEN is not already set in this terminal, configure the vault CLI to use the vault_2 root token (stored in ~/root_token on the vault_2 host).

    $ export VAULT_TOKEN="s.lZMMSsFkuz4KAsJlFTNA3myK"
    
  9. Patch the secret at kv/apikey.

    $ vault kv patch kv/apikey expiration="365 days"
    
    Key              Value
    ---              -----
    created_time     2020-03-25T07:31:33.77995502Z
    deletion_time    n/a
    destroyed        false
    version          2
    

    The secret is updated across all nodes in the cluster, and the kv engine stored the change as a new version (see the versioning note after these steps).

  10. To verify, return to the terminal connected to vault_3 and get the same secret again.

    $ vault kv get kv/apikey
    
    ====== Metadata ======
    Key              Value
    ---              -----
    created_time     2020-03-25T07:31:33.77995502Z
    deletion_time    n/a
    destroyed        false
    version          2
    
    ======= Data =======
    Key           Value
    ---           -----
    expiration    365 days
    webapp        ABB39KKPTWOR832JGNLS02
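
Because the kv secrets engine used here is version 2, the patch stored a new version of the secret rather than overwriting the old one. If you want to compare against the original data, you can request a specific version from any node where VAULT_TOKEN is set; for example:

$ vault kv get -version=1 kv/apikey
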
    

»Raft snapshots for data recovery

Raft provides an interface to take snapshots of its data. These snapshots can be used later to restore the data if it ever becomes necessary.

»Take a snapshot

Execute the following command to take a snapshot of the data.

$ vault operator raft snapshot save demo.snapshot
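
The snapshot is written to demo.snapshot in your current directory. The same data can also be pulled over the HTTP API from the sys/storage/raft/snapshot endpoint; a sketch, assuming this node's API is reachable locally and VAULT_TOKEN is still exported:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    http://127.0.0.1:8200/v1/sys/storage/raft/snapshot -o demo-api.snapshot
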

»Simulate loss of data

First, verify that a secret exists at kv/apikey.

$ vault kv get kv/apikey

Next, delete the secret at kv/apikey. The kv metadata delete command removes all versions of the secret along with its metadata.

$ vault kv metadata delete kv/apikey

Finally, verify that the data has been deleted.

$ vault kv get kv/apikey
No value found at kv/data/apikey

»Restore data from a snapshot

First, recover the data by restoring the data found in demo.snapshot.

$ vault operator raft snapshot restore demo.snapshot

(Optional) Examine the server log of the active node (vault_2).

$ sudo journalctl --no-pager -u vault
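
To narrow the output to restore-related activity, you might filter the log (the exact messages depend on the Vault version):

$ sudo journalctl --no-pager -u vault | grep -i snapshot
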

Verify that the data has been recovered.

$ vault kv get kv/apikey

====== Metadata ======
Key              Value
---              -----
created_time     2020-03-25T07:31:33.77995502Z
deletion_time    n/a
destroyed        false
version          2

======= Data =======
Key           Value
---           -----
expiration    365 days
webapp        ABB39KKPTWOR832JGNLS02

»Resign from active duty

Currently, vault_2 is the active node. Experiment to see what happens if vault_2 steps down from its active node duty.

In the vault_2 terminal, execute the step-down command.

$ vault operator step-down

Now, examine the current raft peer set.

$ vault operator raft list-peers

Node       Address             State       Voter
----       -------             -----       -----
vault_2    10.0.101.22:8201    follower    true
vault_3    10.0.101.23:8201    leader      true
vault_4    10.0.101.24:8201    follower    true

Notice that vault_3 has been promoted to leader and vault_2 is now a follower.

»Remove a cluster member

You may need to remove nodes from the cluster for maintenance, upgrades, or to conserve compute resources.

Remove vault_4 from the cluster.

$ vault operator raft remove-peer vault_4
Peer removed successfully!

Verify that vault_4 has been removed from the cluster by viewing the raft configuration.

$ vault operator raft list-peers

Node       Address             State       Voter
----       -------             -----       -----
vault_2    10.0.101.22:8201    follower    true
vault_3    10.0.101.23:8201    leader      true
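
If you are finished with vault_4, you can also stop its Vault service so the instance is not left running a node that no longer belongs to the cluster. In the vault_4 terminal:

$ sudo systemctl stop vault
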

»Recovery mode for troubleshooting

In the case of an outage caused by corrupt entries in the storage backend, an operator may need to start Vault in recovery mode. In this mode, Vault runs with minimal capabilities and exposes a subset of its API.

»Simulate outage

Stop the Vault service on all remaining cluster members, vault_2 and vault_3, to simulate an outage.

First, return to the terminal where you SSH'd into vault_2 and stop the Vault service.

$ sudo systemctl stop vault

Next, return to the terminal where you SSH'd into vault_3 and stop the Vault service.

$ sudo systemctl stop vault

»Start in recovery mode

Return to the terminal where you SSH'd into vault_3 and start Vault in recovery mode.

$ sudo systemctl start vault@-recovery

The content after the @ symbol is appended to the vault server command executed by this service. This is equivalent to running vault server -config /etc/vault.d -recovery.
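
vault@.service is a systemd template unit, so the instance name after the @ is substituted into its ExecStart line (typically via the %i specifier). To see exactly how the unit provisioned for this guide is written, you can inspect it:

$ systemctl cat vault@.service
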

View the status of the vault@-recovery service.

$ sudo systemctl status vault@-recovery
● vault@-recovery.service - Vault
   Loaded: loaded (/etc/systemd/system/vault@.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-12-11 22:45:36 UTC; 18min ago
  Process: 5907 ExecStartPre=/sbin/setcap cap_ipc_lock=+ep /usr/local/bin/vault (code=exited, status=0/SUCCESS)
 Main PID: 5916 (vault)
    Tasks: 7 (limit: 1152)
   CGroup: /system.slice/system-vault.slice/vault@-recovery.service
           └─5916 /usr/local/bin/vault server -config /etc/vault.d -recovery

Dec 11 22:45:36 ip-10-0-101-244 vault[5916]:                Log Level: info
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]:            Recovery Mode: true
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]:                  Storage: raft
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]:                  Version: Vault v1.3.0
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]: ==> Vault server started! Log data will stream in below:
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]: 2019-12-11T22:45:36.850Z [INFO]  proxy environment: http_proxy= https_proxy= no_proxy=
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]: 2019-12-11T22:45:36.857Z [INFO]  seal-transit: unable to renew token, disabling renewal: err="Error making API request.
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]: URL: PUT http://10.0.101.22:8200/v1/auth/token/renew-self
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]: Code: 400. Errors:
Dec 11 22:45:36 ip-10-0-101-244 vault[5916]: * lease is not renewable"

»Create a recovery operational token

  1. Generate a temporary one-time password (OTP).

    $ vault operator generate-root -generate-otp -recovery-token
    dNsrrcQLSvEDsNAfOUdRN3ECGI
    
  2. Start the generation of the recovery token with the OTP.

    $ vault operator generate-root -init \
         -otp=dNsrrcQLSvEDsNAfOUdRN3ECGI -recovery-token
    
    Nonce         fae5045e-4a6a-729c-92b2-1cce79af5afb
    Started       true
    Progress      0/1
    Complete      false
    OTP Length    26
    
  3. View the recovery key that was generated during the setup of vault_2. In the vault_2 SSH session terminal, read the recovery key value.

    $ cat ~/recovery_key
    dABZMe9xbAPx5MisJXSbn4UL0E6ZaH13iVF/JlgZGNM=
    

    -> NOTE: The recovery key is used instead of an unseal key because this cluster is configured with transit auto-unseal.

  4. Create an encoded token.

    $ vault operator generate-root -recovery-token
    
    Operation nonce: fae5045e-4a6a-729c-92b2-1cce79af5afb
    Unseal Key (will be hidden):
    

    Enter the recovery key when prompted. The output looks similar to the following.

    Nonce            fae5045e-4a6a-729c-92b2-1cce79af5afb
    Started          true
    Progress         1/1
    Complete         true
    Encoded Token    FmA0Sio6E3Q+EnITOR4IAgQtKj0CSzQkETA
    
  5. Complete the creation of a recovery token with the Encoded Token value and OTP.

    $ vault operator generate-root \
      -decode=FmA0Sio6E3Q+EnITOR4IAgQtKj0CSzQkETA \
      -otp dNsrrcQLSvEDsNAfOUdRN3ECGI \
      -recovery-token
    
    r.G8XYB8md7WJPIdKxNoLxqgVy
    

»Fix the issue in the storage backend

In recovery mode, Vault launches with a minimal API enabled. In this mode, you can interact with the raw system backend.

Use the recovery token to list the contents at sys/raw/sys.

$ VAULT_TOKEN=r.G8XYB8md7WJPIdKxNoLxqgVy vault list sys/raw/sys
Keys
----
counters/
policy/
token/

Imagine that during your investigation you discover that a value at a particular path is the cause of the outage. To simulate this, assume that the value found at the path sys/raw/sys/counters is the cause of the outage.
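
Before deleting anything, you can inspect a suspect path with the same recovery token; for example, list what is stored beneath it (purely illustrative here, since this guide only simulates the problem):

$ VAULT_TOKEN=r.G8XYB8md7WJPIdKxNoLxqgVy vault list sys/raw/sys/counters
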

Delete the path at sys/raw/sys/counters.

$ VAULT_TOKEN=r.G8XYB8md7WJPIdKxNoLxqgVy vault delete sys/raw/sys/counters
Success! Data deleted (if it existed) at: sys/raw/sys/counters

»Resume normal operations

First, stop the vault@-recovery service.

$ sudo systemctl stop vault@-recovery

Next, restart the Vault service on both vault_2 and vault_3 (run this command in each node's terminal).

$ sudo systemctl start vault
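
Because vault_1 is still running and providing transit auto-unseal, both nodes unseal automatically when they start. From the vault_2 terminal, you can confirm that the cluster has re-formed:

$ export VAULT_TOKEN=$(cat ~/root_token)
$ vault operator raft list-peers
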

»Clean up

Return to the first terminal where you created the cluster and use Terraform to destroy the cluster.

Destroy the AWS resources provisioned by Terraform.

$ terraform destroy -auto-approve

Delete the state file.

$ rm *tfstate*

»Help and reference