This guide explains the concepts, options, and considerations for delivering a Vault cluster providing a centralized static secrets store. Use this as a supplemental guide to Reference Architecture and Deployment Guide to deliver a pattern for a common Vault use case.
- Reference Architecture covers the recommended production Vault cluster architecture
- Deployment Guide covers how to install and configure Vault for production use
- K/V Secrets Engine is used to store static secrets within the configured physical storage for Vault
- Auth Methods are used to authenticate users and machines with Vault
- Consul Template is used to access static secrets stored in Vault and provide them to the applications and services that require them
- You have followed the Reference Architecture for Vault to provision the necessary resources for a highly-available Vault cluster.
- You have followed the Deployment Guide for Vault to install and configure Vault on each Vault server.
- You have followed the Production Hardening Guide for Vault to improve the Vault cluster's security.
- Vault is unsealed. See the documentation on unsealing.
»Use Vault to store and retrieve secrets
At this stage, your Vault implementation should be initialized and unsealed but does not contain the data needed for it to function. While Vault can perform many functions around cryptography and secrets management, the initial use case is often the manual creation, secure storage, and retrieval of static secrets.
This guide will:
- Discuss authentication methods and what you should consider using
- Discuss static secrets and what you should consider when laying out your secrets schema
- Discuss policies and what you should consider when creating them
- Discuss secret consumption and what you should consider when planning this strategy
The kv secrets engine stores arbitrary static secrets. Secrets can be anything from passwords to database connection strings to API keys. Vault stores these securely, and the nature of the static secret is unimportant to Vault. However, there are several considerations around creating, reading, updating, and deleting secrets that must be decided first, namely who or what has permission to perform any of those functions on any particular secret.
Vault handles this through the use of authentication and policy:
- Authentication - Any interaction with Vault must be first authenticated. Authentication gives permission for a Vault client (e.g. user, machine, or application) to interact with Vault. It does not define any permissions on what that entity can look at inside Vault.
- Policy - Policies are associated with Vault clients and define both what secrets the authenticated client can interact with and in what ways (for example, a policy might grant read access on one path while denying writes on another).
Vault authentication and policies are covered in more detail in the scenario of secure storage and retrieval of static secrets.
»Vault authentication considerations
In a scenario where a human user needs to add static secrets to Vault for machines or applications to access, the user must first authenticate with Vault. For a user to authenticate, an administrator must configure Vault with an auth method. Vault supports several authentication methods, providing multiple options for configuration.
There are some considerations to take into account when choosing an authentication strategy. A human user will likely not authenticate with Vault in the same manner as a programmatic access request or application.
»User lifecycle management
If your organization already has a user authentication system tied into the user lifecycle and Vault supports it (e.g. LDAP), you should consider using this as an auth method. In this case, you only have one place to manage your users, and users are less likely to get stranded in Vault either in a group they have moved from or as a legacy user that no longer works in the organization.
Defining what needs to authenticate with your Vault server early on is useful for several reasons. While you can enable additional auth methods at any point in the Vault lifecycle, the Vault cluster needs to be in a secure location with access managed by firewalls, security groups, etc. Because of this, readjusting your network topology to fit another auth method can be complicated, so base the initial placement on the balance of security and access requirements.
If Vault clients are running in a cloud and an auth method is available for the platform (e.g. AWS, Azure, Google Cloud), the recommendation is to use the cloud-provider-specific auth method to verify the machine's identity. Its drawback is that if you have multiple services running on a single machine, these services all share the same authentication and, therefore, the same access to secrets. This issue is not a factor if you are running microservices or single-service instances.
There are several auth methods available for applications to authenticate with Vault. Choose the auth method that makes the most sense for your applications. If your applications are running on a Kubernetes cluster, the kubernetes auth method may be the right choice. For others, the AppRole auth method might work well.
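As a rough sketch, enabling and using AppRole looks like the following (the role name, policy name, and TTL are illustrative):

```shell
# Enable the AppRole auth method (role and policy names are illustrative)
vault auth enable approle

# Create a role for an application, attaching a policy and token TTL
vault write auth/approle/role/app1 \
    token_policies="app1-policy" \
    token_ttl=1h

# Fetch the RoleID, and generate a SecretID for the application to use
vault read auth/approle/role/app1/role-id
vault write -f auth/approle/role/app1/secret-id

# The application logs in with both values to obtain a Vault token
vault write auth/approle/login role_id="<role-id>" secret_id="<secret-id>"
```

How the RoleID and SecretID are delivered to the application securely is itself a design decision; see the AppRole documentation for recommended patterns.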
The recommendation is to create a Vault admin user in your user authentication system (or via the userpass method) and assign them admin privileges.
As mentioned above, both humans and programmatic applications may need to authenticate to Vault in your enterprise; an example of each is discussed below.
While Vault can support its own database of usernames and passwords using the userpass method, this has drawbacks as with anything where the source of authority is split or duplicated. Enterprises will likely have a database of users and an authentication method for them, so Vault can be configured to delegate authentication to that identity source. In these cases, Vault does not store or replicate the auth database, but delegates authentication to the configured method.
LDAP is a widely used enterprise method of user and group management. Vault's LDAP auth method also supports Microsoft Active Directory, which implements the LDAP protocol and follows similar methods.
The Vault documentation provides a guide for setting up Vault to use LDAP authentication.
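As a sketch, configuring the LDAP auth method looks like the following (all directory server details below are placeholders for your environment):

```shell
# Enable and configure the LDAP auth method (placeholder directory values)
vault auth enable ldap
vault write auth/ldap/config \
    url="ldap://ldap.example.com" \
    userattr="uid" \
    userdn="ou=Users,dc=example,dc=com" \
    groupdn="ou=Groups,dc=example,dc=com" \
    binddn="cn=vault,ou=ServiceAccounts,dc=example,dc=com" \
    bindpass="<bind-password>"

# Users then authenticate with their existing directory credentials
vault login -method=ldap username=bob
```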
»Machine (programmatic) authentication
Machine authentication is a secure and reliable way to authenticate when using cloud-based infrastructure, and Vault provides auth methods for all the major vendors. This works by treating the cloud provider as a Trusted Third Party on the identity of the instance. Each cloud provider handles this in slightly different ways, but the workflow is generally the same: the instance provides some identifying metadata, which Vault uses when querying the provider to verify the instance. Once Vault is satisfied that the instance is valid, it returns a token to the instance to use for subsequent requests to Vault.
A good example of this is the AWS auth method. The EC2 instance sends a cryptographically signed identity document that uniquely represents each EC2 instance, and Vault verifies the signature and instance details with AWS. For more information, see the AWS auth method documentation.
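As a sketch, the AWS EC2 flow looks like the following (the role name, AMI ID, and policy name are illustrative):

```shell
# Enable the AWS auth method and create a role bound to a specific AMI
# (role name, AMI ID, and policy name are illustrative)
vault auth enable aws
vault write auth/aws/role/dev-role \
    auth_type=ec2 \
    bound_ami_id="ami-0123456789abcdef0" \
    policies="dev-machine"

# On the EC2 instance: log in using the signed instance identity document
vault write auth/aws/login role=dev-role \
    pkcs7="$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7 | tr -d '\n')"
```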
To programmatically manage the client authentication and client token lifecycle management, you can use Vault Agent to automate the workflow.
Refer to the Vault Agent Auto-Auth tutorials to learn more.
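A minimal Vault Agent configuration for Auto-Auth might look like this sketch (the file paths and server address are placeholders):

```hcl
# vault-agent.hcl: authenticate via AppRole and write the token to a sink file
vault {
  address = "https://vault.example.com:8200"
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }

  sink "file" {
    config = {
      path = "/etc/vault/token"
    }
  }
}
```

Run the agent with `vault agent -config=vault-agent.hcl`; it authenticates on your application's behalf and keeps the token in the sink refreshed.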
»Static secrets considerations
At this point, you should have an unsealed Vault cluster with the LDAP and AWS auth methods enabled allowing both humans and machines to authenticate to Vault, but no secrets in Vault and no policies set up to enforce access permissions of an authenticated entity. The Getting Started track provides a starting point for understanding static secrets.
Vault static secrets are laid out like a virtual filesystem. By default, Vault enables a secrets engine called kv at the path secret/. The KV secrets engine reads and writes data to the storage backend. Currently, when you start the Vault server in dev mode, it automatically enables v2 of the KV secrets engine at secret/.
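For example (the secret path and key names here are illustrative):

```shell
# Enable KV version 2 at secret/ (dev mode does this automatically)
vault secrets enable -path=secret kv-v2

# Write a static secret, then read it back
vault kv put secret/machine/dev/DB/DB1 username="db1user" password="example"
vault kv get secret/machine/dev/DB/DB1
```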
There are several considerations when thinking about how you will lay out your static secrets “filesystem”.
First, gather all your secrets and define what you will need to access them. For example, it might be best to order human user secrets into a copy of the company's organizational structure. It might be best to order machine secrets by environment and application. You want to group secrets on paths that keep secrets you use together close to each other.
»The Principle of Least Privilege (PoLP)
The Principle of Least Privilege (PoLP) says to grant access to secrets only when necessary. Spending some time now thinking about how you lay out the secrets schema (paths) will make the job of writing policies easier when it is time to do that.
Consider storing the highest value secrets on unique, unshared paths so that you reduce the risk of giving access to them accidentally.
»Start with one environment
If it makes sense to split your secrets by environment, then start with one environment and keep the structure consistent within it. This way, you can keep the policy structure similar in all environments.
Policy templates help enforce structure in your pathing schema and can reduce the number of policy path definitions required.
Below is an example schema for static secrets.
```
secret/
├── machine
│   ├── dev
│   │   ├── DB
│   │   │   ├── DB1
│   │   │   └── DB2
│   │   └── Apps
│   │       ├── App1
│   │       └── App2
│   ├── test
│   │   ├── DB
│   │   │   ├── DB1
│   │   │   └── DB2
│   │   └── Apps
│   │       ├── App1
│   │       └── App2
│   └── prod
│       └── etc
└── users
    ├── accounts
    │   ├── user1
    │   └── user2
    └── development
        ├── user1
        └── user2
```
Once you have defined your secrets schema, you are ready to create the secrets.
You may find it easier to use the Vault UI if you are just starting the process of centralizing Vault's secrets. An introduction to the Vault UI is available to help new users familiarize themselves with the interface.
You can also populate the KV secrets engine programmatically through the API or the CLI. It is more efficient to do this programmatically if you have a lot of secrets and defined the structure well. The API documentation and CLI documentation provide more guidance on this process.
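For instance, writing a secret through the HTTP API might look like this sketch (the secret path is illustrative; note that KV v2 paths include a `data/` segment):

```shell
# Write a secret via the HTTP API; KV v2 paths include a data/ segment
curl -s \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"data": {"username": "db1user", "password": "example"}}' \
    "$VAULT_ADDR/v1/secret/data/machine/dev/DB/DB1"
```

Looping such a call over a well-structured inventory of secrets lets you populate the whole schema in one pass.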
When you populate your static secrets for the first time, there is a security implication: you will temporarily have a plaintext store of all your secrets for reference, and it should be closely guarded and then destroyed. If you must retain this data, for example in a spreadsheet, consider storing it in Vault or encrypting it before placing it on your organization's file storage.
At this point you should have an unsealed Vault cluster with the LDAP and AWS auth methods enabled, allowing both humans and machines to authenticate to Vault. You have also populated the default static secrets engine (kv) with several secrets residing on well-defined paths, allowing policies to be configured to manage authorization.
Policies provide Role-Based Access Control (RBAC) of Vault secrets. When an entity authenticates to Vault, Vault returns a token, and all policies associated with that entity are attached to that token.
As discussed earlier, Vault static secrets are laid out like a virtual filesystem and it is the role of policies to describe the permissions on these paths. For example:
- A secret is created at a given path
- A policy named “production-db” is created that grants permission to read the secret at that path
- The policy is mapped to the user group “marketing” in the LDAP auth method
- As a member of the “marketing” group, Bob authenticates with Vault using LDAP, and the token Bob receives has the “production-db” policy attached
- Bob is allowed to read the secret at that path
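The steps above might be sketched as follows (the secret path is illustrative; KV v2 reads go through a `data/` prefix):

```shell
# Create the production-db policy granting read on the secret's path
cat > production-db.hcl <<'EOF'
path "secret/data/production/db" {
  capabilities = ["read"]
}
EOF
vault policy write production-db production-db.hcl

# Map the policy to the LDAP "marketing" group
vault write auth/ldap/groups/marketing policies="production-db"
```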
Keep several things in mind when writing your policies.
»Principle of Least Privilege (PoLP)
Err on the side of caution when it is unclear what an entity requires. If an entity later needs a secret it cannot access, that can be detected and addressed, whereas access granted to a secret that is not required is harder to find and poses a risk.
Policy templates provide structure to your pathing schema and are often useful to limit the number of policies by using variable interpolation.
Refer to the ACL Policy Path Templating tutorial for step-by-step instructions if you are new to policy templates.
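A templated policy might look like the following sketch (the path layout is illustrative):

```hcl
# One policy serves every entity: each may manage secrets only under
# its own named path, via identity interpolation
path "secret/data/users/{{identity.entity.name}}/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
```

A single policy like this replaces one hand-written policy per user.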
It is good practice to put your policies in version control and have a peer-review process for both security and audit purposes. Review the HashiCorp blog post on codifying Vault policies for more guidance.
Also, you can leverage Terraform to codify the configuration of Vault, including policy deployment. Refer to the Codify Management of Vault Using Terraform tutorial. If you are running Vault Enterprise, the Codify Management of Vault Enterprise tutorial demonstrates policy deployment in multiple namespaces.
»Consumption of secrets
At this point, you should have an unsealed Vault cluster with the LDAP and AWS auth methods enabled, allowing both humans and machines to authenticate to Vault. You have also populated the default static secrets engine (kv) with several secrets residing on well-defined paths. You have policies configured to enforce which identities can access which secrets.
»Machine secret consumption
The purpose of this scenario is to deliver secrets to applications without exposing them to your CI/CD pipeline or hard-coding them into your config files. Vault has first-class integration with both consul-template and envconsul.
- consul-template is a convenient way to populate secrets from Vault into a standard application configuration file for any application. You can run it once at OS startup to build config files, or run it as a daemon that checks for updates. It can also run arbitrary commands when the update process completes, allowing the application to re-read its configuration.
- envconsul provides a way to populate environment variables with Vault values, which a sub-process such as your application can then access.
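As a sketch, a consul-template template rendering a Vault KV v2 secret might look like this (the secret path, output file, and reload command are illustrative):

```shell
# app.conf.ctmpl: KV v2 secrets expose their values under .Data.data
cat > app.conf.ctmpl <<'EOF'
{{ with secret "secret/data/machine/dev/DB/DB1" }}
db_user = "{{ .Data.data.username }}"
db_pass = "{{ .Data.data.password }}"
{{ end }}
EOF

# Render the template to app.conf and run a reload command on change
consul-template \
    -template "app.conf.ctmpl:app.conf:systemctl reload app"
```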
For more about using these tools, see the tutorial on direct application integration, which provides a full introduction.
Refer to the Vault Agent Templates tutorial for step-by-step instructions if you are not familiar with Vault Agent.
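Vault Agent embeds the same templating engine, so an agent configuration can render secrets directly. A sketch of a template stanza (file paths and reload command are illustrative):

```hcl
# Added to the Vault Agent configuration: render the template to the
# destination file and run a command whenever the secret changes
template {
  source      = "/etc/vault/app.conf.ctmpl"
  destination = "/etc/app/app.conf"
  command     = "systemctl reload app"
}
```

This removes the need to run consul-template as a separate daemon alongside the agent.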