
Leverage Nomad's Vault Integration

Vault Integration and Retrieving Dynamic Secrets

Nomad integrates seamlessly with Vault, allowing the applications it deploys to quickly and safely retrieve dynamic credentials for various tasks.

In this guide, you will deploy a web application that needs to authenticate against PostgreSQL to display data from a table to the user.

This guide will demonstrate:

  • Deploying a development Vault server

  • Configuring the Nomad cluster's nodes to integrate with a Vault server

  • Using the appropriate templating syntax to retrieve credentials from Vault

  • Storing those credentials in the secrets task directory to be consumed by the Nomad task


To perform the tasks described in this guide, you need to have a Nomad environment with Consul and Vault installed. You can use this Terraform environment to easily provision a sandbox environment. This guide will assume a cluster with one server node and three client nodes.

»Deploy a development Vault server

This guide is designed for operators who are using a play instance of Vault; however, with some changes, you can perform these steps in a real cluster.

  • If you are connecting to an existing Vault server and have a token that enables you to create roles, you can skip down to "Log in to Vault".

  • If you are connecting to an existing Vault server and you are unable to create roles, please work with your operations team and have the appropriate personnel run from "Write a policy for Nomad server tokens" to "Generate the token for the Nomad server"

  • A Vault operator will also need to run the "Enable and configure the Database secrets engine" steps

  • If you are running a play instance, start the Vault service. You can use an interactive session running in a terminal, a background process running via nohup, or a systemd service unit.

Once Vault is up and responds to vault status commands, continue on.

»Initialize Vault server

Run the following command to initialize a Vault server and receive an unseal key and initial root token. Be sure to note the unseal key and initial root token as you will need these two pieces of information.

$ vault operator init -key-shares=1 -key-threshold=1

The vault operator init command above creates a single Vault unseal key for convenience. For a production environment, it is recommended that you create at least five unseal key shares and securely distribute them to independent operators. The vault operator init command defaults to five key shares and a key threshold of three. If you provisioned more than one server, the others will become standby nodes but should still be unsealed.
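The unseal keys produced by vault operator init are Shamir key shares: any threshold of the key-shares can reconstruct the master key, while fewer reveal nothing. A toy sketch of the k-of-n idea (illustrative only; Vault's actual implementation uses GF(2^8) arithmetic, not this prime field):

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic happens in this field

def split(secret, shares, threshold):
    """Split `secret` into `shares` points on a random degree (threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    """Reconstruct the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=12345, shares=5, threshold=3)
assert combine(shares[:3]) == 12345   # any 3 of the 5 shares recover the secret
assert combine(shares[2:]) == 12345
```

This is why a production deployment distributes the shares to independent operators: no individual holder, and no group smaller than the threshold, can unseal Vault alone.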

»Unseal Vault

Run the following command and then provide your unseal key to Vault.

$ vault operator unseal

The output of unsealing Vault will look similar to the following:

Key                    Value
---                    -----
Seal Type              shamir
Initialized            true
Sealed                 false
Total Shares           1
Threshold              1
Version                0.11.4
Cluster Name           vault-cluster-d12535e5
Cluster ID             49383931-c782-fdc6-443e-7681e7b15aca
HA Enabled             true
HA Cluster             n/a
HA Mode                standby
Active Node Address    <none>

»Log in to Vault

Use the login command to authenticate yourself against Vault using the initial root token you received earlier. You will need to authenticate to run the necessary commands to write policies, create roles, and configure a connection to your database.

$ vault login <your initial root token>

If your login is successful, you will receive output similar to what is shown below:

Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

»Write a policy for Nomad server tokens

To use the Vault integration, you must provide a Vault token to your Nomad servers. Although you can provide your root token to get started quickly, the recommended approach is to use a token derived from a token role. This first requires writing a policy that you will attach to the token you provide to your Nomad servers. By using this approach, you can limit the set of policies that tasks managed by Nomad can access.

For this exercise, use the following policy for the token you will create for your Nomad server. Place this policy in a file named nomad-server-policy.hcl.

# Allow creating tokens under "nomad-cluster" token role. The token role name
# should be updated if "nomad-cluster" is not used.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up "nomad-cluster" token role. The token role name should be
# updated if "nomad-cluster" is not used.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
You can now write a policy called nomad-server by running the following command.

$ vault policy write nomad-server nomad-server-policy.hcl

You should receive the following output.

Success! Uploaded policy: nomad-server

You will generate the actual token in the next few steps.

»Create a token role

At this point, you must create a Vault token role that Nomad can use. The token role allows you to limit what Vault policies are accessible by jobs submitted to Nomad. You will use the following token role.

  "allowed_policies": "access-tables",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true

Notice that the access-tables policy is listed under the allowed_policies key. You have not created this policy yet, but it will be used by the job to retrieve credentials to access the database. A job running in this Nomad cluster will only be allowed to use the access-tables policy.

If you would like to allow all policies to be used by any job in the Nomad cluster except for the ones you specifically prohibit, then use the disallowed_policies key instead and simply list the policies that should not be granted. If you take this approach, be sure to include nomad-server in the disallowed policies group. An example of this is shown below:

  "disallowed_policies": "nomad-server",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true

Save the token role definition in a file named nomad-cluster-role.json and create the token role named nomad-cluster.

$ vault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json

You should receive the following output:

Success! Data written to: auth/token/roles/nomad-cluster
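The role's policy filtering can be modeled simply: with allowed_policies set, only the listed policies may be requested; with disallowed_policies set, everything except the listed ones is permitted. A minimal sketch of that logic (a simplification of Vault's actual behavior, not its API):

```python
def policy_permitted(policy, allowed=None, disallowed=None):
    """Approximate how a token role filters which policies a token may request."""
    if disallowed and policy in disallowed:
        return False
    if allowed is not None:
        return policy in allowed
    return True

# With the role above, only "access-tables" may be requested.
assert policy_permitted("access-tables", allowed=["access-tables"])
assert not policy_permitted("nomad-server", allowed=["access-tables"])

# With disallowed_policies, everything not listed is allowed.
assert policy_permitted("access-tables", disallowed=["nomad-server"])
assert not policy_permitted("nomad-server", disallowed=["nomad-server"])
```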

»Generate the token for the Nomad server

Run the following command to create a token for your Nomad server:

$ vault token create -policy nomad-server -period 72h -orphan

The -orphan flag is included when generating the Nomad server token above to prevent revocation of the token when its parent expires. Vault typically creates tokens with a parent-child relationship. When an ancestor token is revoked, all of its descendant tokens and their associated leases are revoked as well.

If everything works, you should have output similar to the following:

Key                  Value
---                  -----
token                1gr0YoLyTBVZl5UqqvCfK9RJ
token_accessor       5fz20DuDbxKgweJZt3cMynya
token_duration       72h
token_renewable      true
token_policies       ["default" "nomad-server"]
identity_policies    []
policies             ["default" "nomad-server"]
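The parent-child revocation behavior that motivates the -orphan flag can be pictured with a small model of the token tree: revoking a token revokes all of its descendants, but an orphan has no parent to be revoked through. The class and names below are illustrative only, not Vault's API:

```python
class Token:
    """A toy model of Vault's token tree for illustrating revocation."""

    def __init__(self, name, parent=None, orphan=False):
        self.name = name
        self.revoked = False
        self.children = []
        # An orphan token is detached from its creator's lineage.
        self.parent = None if orphan else parent
        if self.parent is not None:
            self.parent.children.append(self)

    def revoke(self):
        """Revoke this token and, recursively, all of its descendants."""
        self.revoked = True
        for child in self.children:
            child.revoke()

root = Token("root")
child = Token("nomad-server", parent=root)
orphan = Token("nomad-server-orphan", parent=root, orphan=True)

root.revoke()
assert child.revoked        # an ordinary child dies with its parent
assert not orphan.revoked   # the orphan token survives
```

Without -orphan, the Nomad server token would be a child of the root token you are currently logged in with, and revoking that root token later would silently break the Nomad-Vault integration.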

»Configure Nomad to enable Vault integration

At this point, you are ready to edit the vault stanza in the Nomad Server's configuration file located at /etc/nomad.d/nomad.hcl. Provide the token you generated in the previous step in the vault stanza of your Nomad server configuration. The token can also be provided as an environment variable called VAULT_TOKEN. Be sure to specify the nomad-cluster-role in the create_from_role option. If using Vault Namespaces, modify both the client and server configuration to include the namespace; alternatively, it can be provided in the environment variable VAULT_NAMESPACE. After following these steps and enabling Vault, the vault stanza in your Nomad server configuration will be similar to what is shown below.

vault {
  enabled          = true
  address          = "http://active.vault.service.consul:8200"
  task_token_ttl   = "1h"
  create_from_role = "nomad-cluster"
  token            = "<your nomad server token>"
  namespace        = "<vault namespace for the cluster>"
}

Restart the Nomad server.

$ sudo systemctl restart nomad

NOTE: Nomad servers will renew the token automatically.

Vault integration needs to be enabled on the client nodes as well. If you are using the Terraform environment, this has been configured for you already. You will see that the vault stanza in your Nomad clients' configuration (located at /etc/nomad.d/nomad.hcl) looks similar to the following:

vault {
  enabled = true
  address = "http://active.vault.service.consul:8200"
}

Note that the Nomad clients do not need to be provided with a Vault token.

»Deploy database

The next few steps involve configuring a connection between Vault and a database, so first use Nomad to deploy a database server to connect to. Create a Nomad job file called db.nomad with the following content:

job "postgres-nomad-demo" {
  datacenters = ["dc1"]

  group "db" {

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/postgres-nomad-demo:latest"
        port_map {
          db = 5432
      resources {
        network {
          port  "db"{
        static = 5432

      service {
        name = "database"
        port = "db"

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"

Run the job as shown below.

$ nomad run db.nomad

Verify the job is running with the following command.

$ nomad status postgres-nomad-demo

The result of the status command will look similar to the output below.

ID            = postgres-nomad-demo
Name          = postgres-nomad-demo
Submit Date   = 2018-11-15T21:01:00Z
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Task Group  Queued  Starting  Running  Failed  Complete  Lost
db          0       0         1        0       0         0

ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
701e2699  5de1330c  db          0        run      running  1m56s ago  1m33s ago

»Enable and configure the Database secrets engine

Now you can move on to configuring the connection between Vault and the database.

»Enable the Database secrets engine

You are using the database secrets engine for Vault in this exercise so that you can generate dynamic credentials for the PostgreSQL database. Run the following command to enable it.

$ vault secrets enable database

If the previous command was successful, you will see the following output:

Success! Enabled the database secrets engine at: database/

»Configure the Database secrets engine

Create a file named connection.json with the following content.

  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "accessdb",
  "connection_url": "postgresql://{{username}}:{{password}}@database.service.consul:5432/postgres?sslmode=disable",
  "username": "postgres",
  "password": "postgres123"

The information above allows Vault to connect to the database and create users with specific privileges. You will specify the accessdb role soon. In a production setting, it is recommended to give Vault credentials with just enough privileges to generate database credentials dynamically and manage their lifecycle.

Run the following command to configure the connection between the database secrets engine and the database.

$ vault write database/config/postgresql @connection.json

If the operation is successful, there will be no output.
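Conceptually, the database plugin substitutes the username and password fields from the configuration into the {{username}} and {{password}} placeholders of connection_url before connecting. A simple sketch of that substitution (Vault does this internally; the function here is only illustrative):

```python
def render_connection_url(template, username, password):
    """Fill the {{username}}/{{password}} placeholders the way Vault's
    database plugin does when building its connection string."""
    return (template
            .replace("{{username}}", username)
            .replace("{{password}}", password))

url = render_connection_url(
    "postgresql://{{username}}:{{password}}@database.service.consul:5432/postgres?sslmode=disable",
    "postgres",
    "postgres123",
)
assert url.startswith("postgresql://postgres:postgres123@")
```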

»Create a Vault role to manage database privileges

Recall from the previous step that you specified accessdb in the allowed_roles key of the connection information. Set up that role now. Create a file called accessdb.sql with the following content:

CREATE USER "{{name}}" WITH ENCRYPTED PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT ALL ON SCHEMA public TO "{{name}}";

The SQL above will be used in the creation_statements parameter of the next command to specify the privileges that the dynamic credentials being generated will possess. In this case, the dynamic database user will have broad privileges that include the ability to read from the tables that the application will need to access.

Run the following command to create the role.

$ vault write database/roles/accessdb db_name=postgresql \
creation_statements=@accessdb.sql default_ttl=1h max_ttl=24h

You should receive the following output after running the previous command.

Success! Data written to: database/roles/accessdb

»Generate PostgreSQL credentials

You should now be able to generate dynamic credentials to access your database. Run the following command to generate a set of credentials:

$ vault read database/creds/accessdb

The previous command should return output similar to what is shown below:

Key                Value
---                -----
lease_id           database/creds/accessdb/3JozEMSMqw0vHHhvla15sKTW
lease_duration     1h
lease_renewable    true
password           A1a-3pMGjpDXHZ2Qzuf7
username           v-root-accessdb-5LA65urB4daA8KYy2xku-1542318363

Congratulations! You have configured Vault's connection to your database and can now generate credentials with the previously specified privileges. Next, you will deploy the application and make sure that it will be able to communicate with Vault and obtain the credentials as well.

»Create access-tables policy for your job

Recall from the "Create a token role" step that you specified a policy named access-tables in the allowed_policies section of the token role. You will create this policy now and give it the capability to read from the database/creds/accessdb endpoint (the same endpoint you read from in the previous step to generate credentials for the database). You will then specify this policy in the Nomad job, which will allow it to retrieve credentials for itself to access the database.

On the Vault server (which could be co-located on the Nomad node), create a file named access-tables-policy.hcl with the following content:

path "database/creds/accessdb" {
  capabilities = ["read"]

Create the access-tables policy with the following command:

$ vault policy write access-tables access-tables-policy.hcl

You should see the following output:

Success! Uploaded policy: access-tables

»Deploy your job with the appropriate policy and templating

Now you are ready to deploy the web application and give it the necessary policy and configuration to communicate with the database. Create a file called web-app.nomad and save the following content in it.

job "nomad-vault-demo" {
  datacenters = ["dc1"]

  group "demo" {
    task "server" {

      vault {
        policies = ["access-tables"]

      driver = "docker"
      config {
        image = "hashicorp/nomad-vault-demo:latest"
        port_map {
          http = 8080

        volumes = [

      template {
        data = <<EOF
{{ with secret "database/creds/accessdb" }}
    "host": "database.service.consul",
    "port": 5432,
    "username": "{{ .Data.username }}",
    {{ /* Ensure password is a properly escaped JSON string. */ }}
    "password": {{ .Data.password | toJSON }},
    "db": "postgres"
{{ end }}
        destination = "secrets/config.json"

      resources {
        network {
          port "http" {}

      service {
        name = "nomad-vault-demo"
        port = "http"

        tags = [

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"

There are a few key points to note here:

  • The job specifies the access-tables policy in the vault stanza of this job. The Nomad client will receive a token with this policy attached. Recall from the previous step that this policy will allow the application to read from the database/creds/accessdb endpoint in Vault and retrieve credentials.

  • The job uses the template stanza's vault integration to populate the JSON configuration file that the application needs. The underlying tool being used is Consul Template. You can use Consul Template's documentation to learn more about the syntax needed to interact with Vault. Please note that although the job defines the template inline, you can use the template stanza in conjunction with the artifact stanza to download an input template from a remote source such as an S3 bucket.

  • The job templates use the toJSON function to ensure the password is encoded as a JSON string. Any templated value which may contain special characters (like quotes or newlines) should be passed through the toJSON function.

  • Finally, note that the destination of the template is the secrets/ task directory. This ensures the data is not accessible with a command like nomad alloc fs or via the filesystem APIs.
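The toJSON point above is just standard JSON string escaping; Python's json.dumps shows why interpolating a raw password into a JSON file is unsafe:

```python
import json

# A dynamic password may contain characters that break a JSON document.
password = 'A1a-3p"MGj\npDXHZ2Qzuf7'  # contains a quote and a newline

# Raw interpolation produces invalid JSON:
raw = '{"password": "%s"}' % password
try:
    json.loads(raw)
    raise AssertionError("should not parse")
except json.JSONDecodeError:
    pass

# Escaping first (what Consul Template's toJSON does) yields a valid document:
escaped = '{"password": %s}' % json.dumps(password)
assert json.loads(escaped)["password"] == password
```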

Use the following command to run the job:

$ nomad run web-app.nomad

»Confirm the Application is Accessing the Database

At this point, you can visit your application at the path /names to confirm the appropriate data is being accessed from the database and displayed to you. There are several ways to do this.

  • Use the dig command to query the SRV record of your service and obtain the port it is using. Then curl your service at the appropriate port and names path.
$ dig +short SRV nomad-vault-demo.service.consul

The output of the dig command indicates the port the service is on in column three.

1 1 30478 ip-172-31-58-230.node.dc1.consul.

This output indicates that the service is at port 30478. This port will vary for each run of the job.
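If you want to script this lookup, the port can be pulled out of the SRV answer; a small sketch assuming the output shape shown above (priority, weight, port, target):

```python
def srv_port(answer_line: str) -> int:
    """Extract the port (third column) from a `dig +short SRV` answer line."""
    priority, weight, port, target = answer_line.split()[:4]
    return int(port)

assert srv_port("1 1 30478 ip-172-31-58-230.node.dc1.consul.") == 30478
```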

$ curl nomad-vault-demo.service.consul:30478/names

If everything is working correctly, you will receive the following HTML output.

<!DOCTYPE html>

<h1> Welcome! </h1>
<h2> If everything worked correctly, you should be able to see a list of names
below </h2>


<h4> John Doe </h4>

<h4> Peter Parker </h4>

<h4> Clifford Roosevelt </h4>

<h4> Bruce Wayne </h4>

<h4> Steven Clark </h4>

<h4> Mary Jane </h4>

  • You can also deploy fabio and visit any Nomad client at its public IP address using a fixed port. The details of this method are beyond the scope of this guide, but you can refer to the Load Balancing with Fabio guide for more information on this topic. Alternatively, you could use the nomad alloc status command along with the AWS console to determine the public IP and port your service is running on (remember to open the port in your AWS security group if you choose this method).


»Next steps

In this guide, you deployed a PostgreSQL database as a Nomad job, then used Vault's database secrets engine to generate and secure its login credentials dynamically.

»Reference Material