»What is HashiCorp Diagnostics (hcdiag)
HashiCorp Diagnostics (hcdiag) is a universal troubleshooting data gathering tool that you can use to collect and archive important data from Vault server environments. The information gathered by hcdiag is well suited for sharing with teams during incident response and troubleshooting.
The hcdiag tool is currently available only for Vault servers operating in a Linux-based environment.
This tutorial demonstrates the basic hcdiag commands, and describes the contents of files created by the tool.
»Prerequisites
To perform the steps in this tutorial, you need the following:
- Docker
- Working internet connection from Docker host
»Scenario introduction
You will start a Vault container image on the Docker host, and access that container through a shell. From the shell, you will download the hcdiag tool and use it against the running Vault server.
You will then retrieve the file archive created by hcdiag, unpack it, and examine its contents to learn about what is gathered by default.
You will also explore some example production outputs, with a deeper explanation of their contents.
You will also learn about some useful options, and how to use a configuration file with hcdiag.
»Start a Vault container
Begin by starting a dev mode Vault server container.
$ docker run \
--cap-add=IPC_LOCK \
--detach \
--name=dev-vault \
--rm \
vault server -dev -dev-root-token-id=root
The container is named dev-vault and runs detached. When you stop the container, it will be automatically removed by Docker.
Confirm that the Vault container successfully started.
$ docker ps -f name=dev-vault --format "table {{.Names}}\t{{.Status}}"
NAMES STATUS
dev-vault Up 13 seconds
You will perform the majority of this scenario using commands from within the container itself. When the dev-vault container displays a status of Up, open a shell inside it.
$ docker exec -it dev-vault sh
Note that the prompt will now appear differently, like this:
/ #
»Set up the environment
Before you download and run hcdiag, you must first perform some initial environment configuration so that the tool knows how to communicate with and authenticate to Vault.
You can provide this configuration through environment variables. Set the VAULT_ADDR and VAULT_TOKEN environment variables for use by hcdiag.
$ export VAULT_ADDR=http://127.0.0.1:8200 VAULT_TOKEN=root
You should now be able to confirm that you can authenticate with Vault using the dev mode initial root token.
$ vault token lookup | grep policies
policies [root]
If your output is different or you encounter an error, double-check the export of environment variables and try again.
Insecure operation: In this scenario, you use a dev mode Vault server and its initial root token. For production hcdiag use, you must use a token with sufficient privileges to execute the vault CLI commands used by hcdiag. You can examine the output of Results.json from an hcdiag archive to determine the commands used, and create a suitable production policy that limits a token to the required commands.
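As a hypothetical sketch, such a policy could grant read access to just the sys endpoints queried in this tutorial's run. The policy name and paths below are illustrative assumptions; confirm the exact paths against Results.json from your own run, since some commands (such as vault debug) require additional endpoints.

```shell
# Hypothetical example policy: read access to the sys endpoints queried
# by the hcdiag run shown in this tutorial. Verify required paths against
# Results.json from your own run before using anything like this in
# production.
vault policy write hcdiag - <<'EOF'
path "sys/health"      { capabilities = ["read"] }
path "sys/seal-status" { capabilities = ["read"] }
path "sys/host-info"   { capabilities = ["read"] }
EOF

# Create a token restricted to that policy for hcdiag to use:
vault token create -policy=hcdiag
```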
»Install necessary tools
Before you can download hcdiag, you need to install the curl and unzip tools. The Vault container is based on Alpine Linux, so you can do this with the apk package manager.
First update package sources.
$ apk update
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
v3.13.5-280-g32fbcd8e25 [https://dl-cdn.alpinelinux.org/alpine/v3.13/main]
v3.13.5-278-g37b0c46534 [https://dl-cdn.alpinelinux.org/alpine/v3.13/community]
OK: 13892 distinct packages available
Then install curl and unzip.
$ apk add curl unzip
(1/5) Installing brotli-libs (1.0.9-r3)
(2/5) Installing nghttp2-libs (1.42.0-r1)
(3/5) Installing libcurl (7.78.0-r0)
(4/5) Installing curl (7.78.0-r0)
(5/5) Installing unzip (6.0-r9)
Executing busybox-1.32.1-r6.trigger
OK: 12 MiB in 24 packages
»Download hcdiag
You are now ready to download hcdiag from the HashiCorp releases repository. Recall that it is currently available only as a Linux binary, not as a Linux package (such as an apt or apk package).
NOTE: You can download the latest hcdiag binary from here. The hcdiag repository is now public, so visit the releases page to find out what's new.
Download the binary with curl.
$ curl \
--silent \
--remote-name \
https://releases.hashicorp.com/hcdiag/0.1.2/hcdiag_0.1.2_linux_amd64.zip
Unzip and remove the archive.
$ unzip hcdiag_0.1.2_linux_amd64.zip && rm -f hcdiag_0.1.2_linux_amd64.zip
Archive: hcdiag_0.1.2_linux_amd64.zip
inflating: linux_amd64/hcdiag
Finally, add the hcdiag executable directory to the PATH so that you can invoke the command using just its name.
$ export PATH=$PWD/linux_amd64:$PATH
»hcdiag usage
The following table lists the available command arguments.
Argument | Description | Type | Default Value |
---|---|---|---|
dryrun | Perform a dry run to display commands without executing them | bool | false |
os | Override operating system detection | string | "auto" |
consul | Run Consul diagnostics | bool | false |
nomad | Run Nomad diagnostics | bool | false |
terraform-ent | Run Terraform Enterprise/Cloud diagnostics | bool | false |
vault | Run Vault diagnostics | bool | false |
includes | Files or directories to include (comma-separated; file globbing is available if the pattern is 'wrapped-in-single-quotes'), e.g. '/var/log/consul-*,/var/log/nomad-*' | string | "" |
include-since | Time range to include files, counting back from now. Takes a 'go-formatted' duration, usage examples: 72h , 25m , 45s , 120h1m90s | string | "72h" |
destination | Path to the directory the bundle should be written in | string | "." |
dest | Shorthand for -destination | string | "." |
config | Path to HCL configuration file | string | "" |
serial | Run products in sequence rather than concurrently. Mostly for dev - use only if you want to be especially delicate with system load. | bool | false |
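For instance, combining a few of these options looks like the following. These invocations are illustrative sketches; the log path shown is an assumption, not a path hcdiag requires.

```shell
# Preview the commands hcdiag would run, without executing anything:
hcdiag -vault -dryrun

# Gather Vault diagnostics plus any matching log files from the last
# 24 hours, writing the bundle to /tmp (the glob path is illustrative):
hcdiag -vault -includes '/vault/logs/*' -include-since 24h -dest /tmp
```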
»Run hcdiag
Use hcdiag to include all available environment information for Vault.
$ hcdiag -vault
2022-02-24T14:50:46.821Z [INFO] hcdiag: Checking product availability
2022-02-24T14:50:46.946Z [INFO] hcdiag: Gathering diagnostics
2022-02-24T14:50:46.946Z [INFO] hcdiag: Running seekers for: product=host
2022-02-24T14:50:46.946Z [INFO] hcdiag: running: seeker=stats
2022-02-24T14:50:46.946Z [INFO] hcdiag: Running seekers for: product=vault
2022-02-24T14:50:46.946Z [INFO] hcdiag: running: seeker="vault version"
2022-02-24T14:50:46.991Z [INFO] hcdiag: running: seeker="vault status -format=json"
2022-02-24T14:50:47.042Z [INFO] hcdiag: running: seeker="vault read sys/health -format=json"
2022-02-24T14:50:47.088Z [INFO] hcdiag: running: seeker="vault read sys/seal-status -format=json"
2022-02-24T14:50:47.138Z [INFO] hcdiag: running: seeker="vault read sys/host-info -format=json"
2022-02-24T14:50:47.187Z [INFO] hcdiag: running: seeker="vault debug -output=hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz -duration=10s"
2022-02-24T14:50:58.275Z [INFO] hcdiag: Created Results.json file: dest=hcdiag-2022-02-24T145046Z/Results.json
2022-02-24T14:50:58.275Z [INFO] hcdiag: Created Manifest.json file: dest=hcdiag-2022-02-24T145046Z/Manifest.json
2022-02-24T14:50:58.282Z [INFO] hcdiag: Compressed and archived output file: dest=./hcdiag-2022-02-24T145046Z.tar.gz
TIP: You can also invoke hcdiag without options to gather all available environment and product information.
For more details on the options accepted by hcdiag, execute hcdiag -h.
»Examine the output
What did hcdiag give us in the brief moments that it was running?
List the directory for tar+gzip archive files to discover the file that hcdiag created.
$ ls -l *.gz
-rw-r--r-- 1 vault vault 139143 Feb 24 14:50 hcdiag-2022-02-24T145046Z.tar.gz
The filename contains the hcdiag prefix, followed by a timestamp. You can unpack the archive and further examine its contents.
$ tar zxvf hcdiag-2022-02-24T145046Z.tar.gz
hcdiag-2022-02-24T145046Z/Manifest.json
hcdiag-2022-02-24T145046Z/Results.json
hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz
You are encouraged to inspect the bundle to ensure that it contains only information appropriate to share for your specific use case. The tool makes no attempt to obscure secrets or other sensitive information.
Vault Enterprise users: If you are a Vault Enterprise user, you can share the output from hcdiag runs with HashiCorp Customer Success to greatly reduce the amount of information gathering needed in a support request.
The tool only works locally and does not export or share the diagnostic bundle with anyone. You must use other tools to transfer it to a secure location so you can share it with specific support staff who need to view it.
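Before transferring a bundle, a quick, deliberately non-exhaustive scan for obviously sensitive strings can help with that review. The directory and patterns below are illustrative placeholders; in practice, point grep at the directory unpacked from your hcdiag archive.

```shell
# Create a stand-in bundle directory purely for illustration; in practice,
# use the directory unpacked from the hcdiag archive.
mkdir -p hcdiag-example
printf 'root_token=example-value\n' > hcdiag-example/sample.txt

# Case-insensitive, recursive scan for a few common sensitive patterns.
# This is a starting point for review, not a guarantee that nothing
# sensitive remains in the bundle.
grep -riE 'token|secret|password' hcdiag-example/
```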
The directory hcdiag-2022-02-24T145046Z, created by unpacking the archive, contains 3 files. Their contents are described in the following sections with examples.
»Example production outputs
This section takes a deeper dive into each output file and its contents.
»Manifest.json
The manifest contains JSON data representing details about the hcdiag run. Here is an example.
{
"started_at": "2022-02-24T14:50:46.821521827Z",
"ended_at": "2022-02-24T14:50:58.274716069Z",
"duration": "11.453194227000001 seconds",
"num_errors": 0,
"num_seekers": 7,
"configuration": {
"host_config": null,
"products_config": null,
"operating_system": "auto",
"serial": false,
"dry_run": false,
"consul_enabled": false,
"nomad_enabled": false,
"terraform_ent_enabled": false,
"vault_enabled": true,
"includes": null,
"include_from": "1970-01-01T00:00:00Z",
"include_to": "2022-02-24T14:50:46.821520809Z",
"destination": "."
}
}
From this output, you can learn things like which products are included in the output, the duration of the run, and whether or not any errors were encountered.
»Results.json
The results file contains detailed information about the host and Vault environment. It is a large amount of output, best parsed and queried with a tool like jq for specific answers.
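For example, the sketch below builds a tiny stand-in Results.json with the same top-level shape as hcdiag's output, then uses jq to answer two common questions; run the same queries against the real file from your unpacked bundle (in this container, jq can be installed with apk add jq). The stand-in file content is an assumption for illustration only.

```shell
# Stand-in Results.json with the same top-level shape as hcdiag output;
# substitute the real file from your unpacked bundle.
cat > Results.json <<'EOF'
{
  "vault": {
    "vault version": {
      "runner": { "command": "vault version" },
      "result": "Vault v1.9.3",
      "error": ""
    }
  }
}
EOF

# List every Vault command that hcdiag ran:
jq -r '.vault | keys[]' Results.json

# Show any commands that reported an error (none in this stand-in):
jq -r '.vault | to_entries[] | select(.value.error != "") | .key' Results.json
```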
{
"host": {
"stats": {
"runner": {
"os": "linux"
},
"result": {
"disk": [
...snip...
],
"host": {
...snip...
},
"memory": {
...snip...
},
"uname": "#1 SMP Mon Nov 8 10:21:19 UTC 2021"
},
"error": ""
}
},
"vault": {
"vault debug -output=hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz -duration=10s": {
"runner": {
"command": "vault debug -output=hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz -duration=10s"
},
...snip...
},
"vault read sys/seal-status -format=json": {
"runner": {
"command": "vault read sys/seal-status -format=json"
},
"result": {
...snip...
},
"error": ""
},
"vault status -format=json": {
"runner": {
"command": "vault status -format=json"
},
...snip...
},
"vault version": {
"runner": {
"command": "vault version"
},
"result": "Vault v1.9.3 (7dbdd57243a0d8d9d9e07cd01eb657369f8e1b8a)",
"error": ""
}
}
}
The full, unabridged output from the same run follows.
{
"host": {
"stats": {
"runner": {
"os": "linux"
},
"result": {
"disk": [
{
"device": "overlay",
"mountpoint": "/",
"fstype": "overlay",
"opts": [
"rw",
"relatime"
]
},
{
"device": "proc",
"mountpoint": "/proc",
"fstype": "proc",
"opts": [
"rw",
"nosuid",
"nodev",
"noexec",
"relatime"
]
},
{
"device": "tmpfs",
"mountpoint": "/dev",
"fstype": "tmpfs",
"opts": [
"rw",
"nosuid"
]
},
{
"device": "devpts",
"mountpoint": "/dev/pts",
"fstype": "devpts",
"opts": [
"rw",
"nosuid",
"noexec",
"relatime"
]
},
{
"device": "sysfs",
"mountpoint": "/sys",
"fstype": "sysfs",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime"
]
},
{
"device": "cgroup",
"mountpoint": "/sys/fs/cgroup",
"fstype": "cgroup2",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime"
]
},
{
"device": "mqueue",
"mountpoint": "/dev/mqueue",
"fstype": "mqueue",
"opts": [
"rw",
"nosuid",
"nodev",
"noexec",
"relatime"
]
},
{
"device": "shm",
"mountpoint": "/dev/shm",
"fstype": "tmpfs",
"opts": [
"rw",
"nosuid",
"nodev",
"noexec",
"relatime"
]
},
{
"device": "/dev/vda1",
"mountpoint": "/vault/logs",
"fstype": "ext4",
"opts": [
"rw",
"relatime",
"bind"
]
},
{
"device": "/dev/vda1",
"mountpoint": "/vault/file",
"fstype": "ext4",
"opts": [
"rw",
"relatime",
"bind"
]
},
{
"device": "/dev/vda1",
"mountpoint": "/etc/resolv.conf",
"fstype": "ext4",
"opts": [
"rw",
"relatime",
"bind"
]
},
{
"device": "/dev/vda1",
"mountpoint": "/etc/hostname",
"fstype": "ext4",
"opts": [
"rw",
"relatime",
"bind"
]
},
{
"device": "/dev/vda1",
"mountpoint": "/etc/hosts",
"fstype": "ext4",
"opts": [
"rw",
"relatime",
"bind"
]
},
{
"device": "proc",
"mountpoint": "/proc/bus",
"fstype": "proc",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime",
"bind"
]
},
{
"device": "proc",
"mountpoint": "/proc/fs",
"fstype": "proc",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime",
"bind"
]
},
{
"device": "proc",
"mountpoint": "/proc/irq",
"fstype": "proc",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime",
"bind"
]
},
{
"device": "proc",
"mountpoint": "/proc/sys",
"fstype": "proc",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime",
"bind"
]
},
{
"device": "proc",
"mountpoint": "/proc/sysrq-trigger",
"fstype": "proc",
"opts": [
"ro",
"nosuid",
"nodev",
"noexec",
"relatime",
"bind"
]
},
{
"device": "tmpfs",
"mountpoint": "/proc/acpi",
"fstype": "tmpfs",
"opts": [
"ro",
"relatime"
]
},
{
"device": "tmpfs",
"mountpoint": "/proc/kcore",
"fstype": "tmpfs",
"opts": [
"rw",
"nosuid",
"bind"
]
},
{
"device": "tmpfs",
"mountpoint": "/proc/keys",
"fstype": "tmpfs",
"opts": [
"rw",
"nosuid",
"bind"
]
},
{
"device": "tmpfs",
"mountpoint": "/proc/timer_list",
"fstype": "tmpfs",
"opts": [
"rw",
"nosuid",
"bind"
]
},
{
"device": "tmpfs",
"mountpoint": "/proc/sched_debug",
"fstype": "tmpfs",
"opts": [
"rw",
"nosuid",
"bind"
]
},
{
"device": "tmpfs",
"mountpoint": "/sys/firmware",
"fstype": "tmpfs",
"opts": [
"ro",
"relatime"
]
}
],
"host": {
"hostname": "fac9458dceba",
"uptime": 181,
"bootTime": 1645714066,
"procs": 5,
"os": "linux",
"platform": "alpine",
"platformFamily": "alpine",
"platformVersion": "3.14.3",
"kernelVersion": "5.10.76-linuxkit",
"kernelArch": "x86_64",
"virtualizationSystem": "",
"virtualizationRole": "guest",
"hostId": "a4cf4b9a-0000-0000-802b-63c077dfd558"
},
"memory": {
"total": 5704634368,
"available": 4859256832,
"used": 269352960,
"usedPercent": 4.7216516015632575,
"free": 4411375616,
"active": 240119808,
"inactive": 937185280,
"wired": 0,
"laundry": 0,
"buffers": 7458816,
"cached": 1016446976,
"writeBack": 0,
"dirty": 12288,
"writeBackTmp": 0,
"shared": 343969792,
"slab": 59740160,
"sreclaimable": 31637504,
"sunreclaim": 28102656,
"pageTables": 4677632,
"swapCached": 0,
"commitLimit": 3926052864,
"committedAS": 3108286464,
"highTotal": 0,
"highFree": 0,
"lowTotal": 0,
"lowFree": 0,
"swapTotal": 1073737728,
"swapFree": 1073737728,
"mapped": 304005120,
"vmallocTotal": 35184372087808,
"vmallocUsed": 11104256,
"vmallocChunk": 0,
"hugePagesTotal": 0,
"hugePagesFree": 0,
"hugePageSize": 2097152
},
"uname": "#1 SMP Mon Nov 8 10:21:19 UTC 2021"
},
"error": ""
}
},
"vault": {
"vault debug -output=hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz -duration=10s": {
"runner": {
"command": "vault debug -output=hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz -duration=10s"
},
"result": "Overwriting interval value \"30s\" to the duration value \"10s\"\n==\u003e Starting debug capture...\n Vault Address: http://localhost:8200\n Client Version: 1.9.3\n Duration: 10s\n Interval: 10s\n Metrics Interval: 10s\n Targets: config, host, metrics, pprof, replication-status, server-status, log\n Output: hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz\n\n==\u003e Capturing static information...\n2022-02-24T14:50:47.235Z [INFO] capturing configuration state\n\n==\u003e Capturing dynamic information...\n2022-02-24T14:50:47.236Z [INFO] capturing metrics: count=0\n2022-02-24T14:50:47.236Z [INFO] capturing replication status: count=0\n2022-02-24T14:50:47.236Z [INFO] capturing host information: count=0\n2022-02-24T14:50:47.237Z [INFO] capturing pprof data: count=0\n2022-02-24T14:50:47.237Z [INFO] capturing server status: count=0\n2022-02-24T14:50:57.242Z [INFO] capturing host information: count=1\n2022-02-24T14:50:57.242Z [INFO] capturing server status: count=1\n2022-02-24T14:50:57.242Z [INFO] capturing metrics: count=1\n2022-02-24T14:50:57.242Z [INFO] capturing replication status: count=1\n2022-02-24T14:50:57.351Z [INFO] capturing pprof data: count=1\nFinished capturing information, bundling files...\nSuccess! Bundle written to: hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz",
"error": ""
},
"vault read sys/health -format=json": {
"runner": {
"command": "vault read sys/health -format=json"
},
"result": {
"data": null,
"lease_duration": 0,
"lease_id": "",
"renewable": false,
"request_id": "",
"warnings": null
},
"error": ""
},
"vault read sys/host-info -format=json": {
"runner": {
"command": "vault read sys/host-info -format=json"
},
"result": {
"data": {
"cpu": [
{
"cacheSize": 8192,
"coreId": "0",
"cores": 1,
"cpu": 0,
"family": "6",
"flags": [
"fpu",
"vme",
"de",
"pse",
"tsc",
"msr",
"pae",
"mce",
"cx8",
"apic",
"sep",
"mtrr",
"pge",
"mca",
"cmov",
"pat",
"pse36",
"clflush",
"mmx",
"fxsr",
"sse",
"sse2",
"ss",
"ht",
"pbe",
"syscall",
"nx",
"pdpe1gb",
"lm",
"constant_tsc",
"rep_good",
"nopl",
"xtopology",
"nonstop_tsc",
"cpuid",
"pni",
"pclmulqdq",
"dtes64",
"ds_cpl",
"ssse3",
"sdbg",
"fma",
"cx16",
"xtpr",
"pcid",
"sse4_1",
"sse4_2",
"movbe",
"popcnt",
"aes",
"xsave",
"avx",
"f16c",
"rdrand",
"hypervisor",
"lahf_lm",
"abm",
"3dnowprefetch",
"fsgsbase",
"bmi1",
"avx2",
"bmi2",
"erms",
"avx512f",
"avx512cd",
"xsaveopt",
"arat"
],
"mhz": 2300,
"microcode": "",
"model": "126",
"modelName": "Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz",
"physicalId": "0",
"stepping": 5,
"vendorId": "GenuineIntel"
},
{
"cacheSize": 8192,
"coreId": "0",
"cores": 1,
"cpu": 1,
"family": "6",
"flags": [
"fpu",
"vme",
"de",
"pse",
"tsc",
"msr",
"pae",
"mce",
"cx8",
"apic",
"sep",
"mtrr",
"pge",
"mca",
"cmov",
"pat",
"pse36",
"clflush",
"mmx",
"fxsr",
"sse",
"sse2",
"ss",
"ht",
"pbe",
"syscall",
"nx",
"pdpe1gb",
"lm",
"constant_tsc",
"rep_good",
"nopl",
"xtopology",
"nonstop_tsc",
"cpuid",
"pni",
"pclmulqdq",
"dtes64",
"ds_cpl",
"ssse3",
"sdbg",
"fma",
"cx16",
"xtpr",
"pcid",
"sse4_1",
"sse4_2",
"movbe",
"popcnt",
"aes",
"xsave",
"avx",
"f16c",
"rdrand",
"hypervisor",
"lahf_lm",
"abm",
"3dnowprefetch",
"fsgsbase",
"bmi1",
"avx2",
"bmi2",
"erms",
"avx512f",
"avx512cd",
"xsaveopt",
"arat"
],
"mhz": 2300,
"microcode": "",
"model": "126",
"modelName": "Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz",
"physicalId": "1",
"stepping": 5,
"vendorId": "GenuineIntel"
},
{
"cacheSize": 8192,
"coreId": "0",
"cores": 1,
"cpu": 2,
"family": "6",
"flags": [
"fpu",
"vme",
"de",
"pse",
"tsc",
"msr",
"pae",
"mce",
"cx8",
"apic",
"sep",
"mtrr",
"pge",
"mca",
"cmov",
"pat",
"pse36",
"clflush",
"mmx",
"fxsr",
"sse",
"sse2",
"ss",
"ht",
"pbe",
"syscall",
"nx",
"pdpe1gb",
"lm",
"constant_tsc",
"rep_good",
"nopl",
"xtopology",
"nonstop_tsc",
"cpuid",
"pni",
"pclmulqdq",
"dtes64",
"ds_cpl",
"ssse3",
"sdbg",
"fma",
"cx16",
"xtpr",
"pcid",
"sse4_1",
"sse4_2",
"movbe",
"popcnt",
"aes",
"xsave",
"avx",
"f16c",
"rdrand",
"hypervisor",
"lahf_lm",
"abm",
"3dnowprefetch",
"fsgsbase",
"bmi1",
"avx2",
"bmi2",
"erms",
"avx512f",
"avx512cd",
"xsaveopt",
"arat"
],
"mhz": 2300,
"microcode": "",
"model": "126",
"modelName": "Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz",
"physicalId": "2",
"stepping": 5,
"vendorId": "GenuineIntel"
},
{
"cacheSize": 8192,
"coreId": "0",
"cores": 1,
"cpu": 3,
"family": "6",
"flags": [
"fpu",
"vme",
"de",
"pse",
"tsc",
"msr",
"pae",
"mce",
"cx8",
"apic",
"sep",
"mtrr",
"pge",
"mca",
"cmov",
"pat",
"pse36",
"clflush",
"mmx",
"fxsr",
"sse",
"sse2",
"ss",
"ht",
"pbe",
"syscall",
"nx",
"pdpe1gb",
"lm",
"constant_tsc",
"rep_good",
"nopl",
"xtopology",
"nonstop_tsc",
"cpuid",
"pni",
"pclmulqdq",
"dtes64",
"ds_cpl",
"ssse3",
"sdbg",
"fma",
"cx16",
"xtpr",
"pcid",
"sse4_1",
"sse4_2",
"movbe",
"popcnt",
"aes",
"xsave",
"avx",
"f16c",
"rdrand",
"hypervisor",
"lahf_lm",
"abm",
"3dnowprefetch",
"fsgsbase",
"bmi1",
"avx2",
"bmi2",
"erms",
"avx512f",
"avx512cd",
"xsaveopt",
"arat"
],
"mhz": 2300,
"microcode": "",
"model": "126",
"modelName": "Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz",
"physicalId": "3",
"stepping": 5,
"vendorId": "GenuineIntel"
}
],
"cpu_times": [
{
"cpu": "cpu0",
"guest": 0,
"guestNice": 0,
"idle": 162.41,
"iowait": 0.6,
"irq": 0,
"nice": 0,
"softirq": 0.03,
"steal": 0,
"system": 3.43,
"user": 0.77
},
{
"cpu": "cpu1",
"guest": 0,
"guestNice": 0,
"idle": 163.69,
"iowait": 0.28,
"irq": 0,
"nice": 0,
"softirq": 0.02,
"steal": 0,
"system": 1.17,
"user": 1.12
},
{
"cpu": "cpu2",
"guest": 0,
"guestNice": 0,
"idle": 162.56,
"iowait": 0.32,
"irq": 0,
"nice": 0,
"softirq": 0.05,
"steal": 0,
"system": 1.79,
"user": 1.07
},
{
"cpu": "cpu3",
"guest": 0,
"guestNice": 0,
"idle": 161.98,
"iowait": 1.1,
"irq": 0,
"nice": 0,
"softirq": 0.08,
"steal": 0,
"system": 1.68,
"user": 0.99
}
],
"disk": [
{
"free": 57431232512,
"fstype": "ext2/ext3",
"inodesFree": 3894488,
"inodesTotal": 3907584,
"inodesUsed": 13096,
"inodesUsedPercent": 0.3351431472746331,
"path": "/vault/logs",
"total": 62725623808,
"used": 2077675520,
"usedPercent": 3.4913689205702814
},
{
"free": 57431232512,
"fstype": "ext2/ext3",
"inodesFree": 3894488,
"inodesTotal": 3907584,
"inodesUsed": 13096,
"inodesUsedPercent": 0.3351431472746331,
"path": "/vault/file",
"total": 62725623808,
"used": 2077675520,
"usedPercent": 3.4913689205702814
},
{
"free": 57431232512,
"fstype": "ext2/ext3",
"inodesFree": 3894488,
"inodesTotal": 3907584,
"inodesUsed": 13096,
"inodesUsedPercent": 0.3351431472746331,
"path": "/etc/resolv.conf",
"total": 62725623808,
"used": 2077675520,
"usedPercent": 3.4913689205702814
},
{
"free": 57431232512,
"fstype": "ext2/ext3",
"inodesFree": 3894488,
"inodesTotal": 3907584,
"inodesUsed": 13096,
"inodesUsedPercent": 0.3351431472746331,
"path": "/etc/hostname",
"total": 62725623808,
"used": 2077675520,
"usedPercent": 3.4913689205702814
},
{
"free": 57431232512,
"fstype": "ext2/ext3",
"inodesFree": 3894488,
"inodesTotal": 3907584,
"inodesUsed": 13096,
"inodesUsedPercent": 0.3351431472746331,
"path": "/etc/hosts",
"total": 62725623808,
"used": 2077675520,
"usedPercent": 3.4913689205702814
}
],
"host": {
"bootTime": 1645714066,
"hostid": "6171926a-e406-4516-93eb-6352a38169cb",
"hostname": "fac9458dceba",
"kernelArch": "x86_64",
"kernelVersion": "5.10.76-linuxkit",
"os": "linux",
"platform": "alpine",
"platformFamily": "alpine",
"platformVersion": "3.14.3",
"procs": 5,
"uptime": 181,
"virtualizationRole": "guest",
"virtualizationSystem": ""
},
"memory": {
"active": 241491968,
"available": 4844466176,
"buffers": 7458816,
"cached": 1016471552,
"commitlimit": 3926052864,
"committedas": 3125587968,
"dirty": 12288,
"free": 4396568576,
"highfree": 0,
"hightotal": 0,
"hugepagesfree": 0,
"hugepagesize": 2097152,
"hugepagestotal": 0,
"inactive": 951267328,
"laundry": 0,
"lowfree": 0,
"lowtotal": 0,
"mapped": 305569792,
"pagetables": 5148672,
"shared": 343969792,
"slab": 59760640,
"sreclaimable": 31653888,
"sunreclaim": 28106752,
"swapcached": 0,
"swapfree": 1073737728,
"swaptotal": 1073737728,
"total": 5704634368,
"used": 284135424,
"usedPercent": 4.980782389732993,
"vmallocchunk": 0,
"vmalloctotal": 35184372087808,
"vmallocused": 11153408,
"wired": 0,
"writeback": 0,
"writebacktmp": 0
},
"timestamp": "2022-02-24T14:50:47.182059169Z"
},
"lease_duration": 0,
"lease_id": "",
"renewable": false,
"request_id": "4c50da3b-eb37-9892-46a9-929c7ff087c7",
"warnings": null
},
"error": ""
},
"vault read sys/seal-status -format=json": {
"runner": {
"command": "vault read sys/seal-status -format=json"
},
"result": {
"data": null,
"lease_duration": 0,
"lease_id": "",
"renewable": false,
"request_id": "",
"warnings": null
},
"error": ""
},
"vault status -format=json": {
"runner": {
"command": "vault status -format=json"
},
"result": {
"active_time": "0001-01-01T00:00:00Z",
"cluster_id": "f09dd249-6efa-09c2-637d-861694314024",
"cluster_name": "vault-cluster-600609bf",
"ha_enabled": false,
"initialized": true,
"migration": false,
"n": 1,
"nonce": "",
"progress": 0,
"recovery_seal": false,
"sealed": false,
"storage_type": "inmem",
"t": 1,
"type": "shamir",
"version": "1.9.3"
},
"error": ""
},
"vault version": {
"runner": {
"command": "vault version"
},
"result": "Vault v1.9.3 (7dbdd57243a0d8d9d9e07cd01eb657369f8e1b8a)",
"error": ""
}
}
}
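As a quick sanity check when reading these reports, the figures in the memory block are internally consistent: usedPercent is simply used divided by total. A small sketch using the values from the sys/host-info output above:

```python
# Verify that "usedPercent" in the sys/host-info memory block is
# derived from the "used" and "total" byte counts shown above.
total = 5704634368  # "total" from the memory section
used = 284135424    # "used" from the memory section

used_percent = used / total * 100
print(round(used_percent, 6))  # prints 4.980782, matching "usedPercent"
```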
»VaultDebug.tar.gz
The debug tarball contains the results of invoking the vault debug
command.
The following is a tree of output files produced by unpacking the hcdiag-2022-02-24T145046Z/VaultDebug.tar.gz
file.
VaultDebug
├── 2022-02-24T14-50-47Z
│ ├── allocs.prof
│ ├── block.prof
│ ├── goroutine.prof
│ ├── goroutines.txt
│ ├── heap.prof
│ ├── mutex.prof
│ ├── profile.prof
│ ├── threadcreate.prof
│ └── trace.out
├── 2022-02-24T14-50-57Z
│ ├── allocs.prof
│ ├── block.prof
│ ├── goroutine.prof
│ ├── goroutines.txt
│ ├── heap.prof
│ ├── mutex.prof
│ └── threadcreate.prof
├── config.json
├── host_info.json
├── index.json
├── metrics.json
├── replication_status.json
├── server_status.json
└── vault.log
2 directories, 23 files
The first entry, 2022-02-24T14-50-47Z, is a directory containing runtime profiling information and goroutine data gathered from the running Vault process with the Go pprof utility.
These profiles are essentially collections of stack traces and their associated metadata. They are most useful when debugging issues by engineers familiar with the related Vault source code.
Here is a breakdown of the contents of each file.
- allocs.prof contains all past memory allocations.
- block.prof contains stack traces which led to blocking on synchronization primitives.
- goroutine.prof contains traces of all current goroutines.
- goroutines.txt contains a listing of all goroutines.
- heap.prof contains memory allocations of live objects.
- mutex.prof contains stack traces for holders of contended mutexes.
- profile.prof contains the CPU profile information.
- threadcreate.prof contains stack traces that led to creation of new OS threads.
- trace.out contains the CPU trace information.
You typically visualize profile information by passing the filename of a .prof file to the pprof command. If you have an established Go environment, you can use it to examine these files.
You can use the pprof tool in both interactive and non-interactive modes. Here are some example non-interactive invocations of the tool against the example data to familiarize you with some of its outputs.
The first example lists the top 10 entries from the 2022-02-24T14-50-47Z CPU profile:
$ go tool pprof -top 2022-02-24T14-50-47Z/profile.prof | head -n 16
File: vault
Type: cpu
Time: Feb 10, 2022 at 11:41am (EST)
Duration: 10.10s, Total samples = 260ms ( 2.57%)
Showing nodes accounting for 260ms, 100% of 260ms total
flat flat% sum% cum cum%
80ms 30.77% 30.77% 80ms 30.77% runtime.futex
40ms 15.38% 46.15% 40ms 15.38% runtime.epollwait
20ms 7.69% 53.85% 30ms 11.54% runtime.gentraceback
10ms 3.85% 57.69% 10ms 3.85% compress/flate.(*huffmanBitWriter).writeTokens
10ms 3.85% 61.54% 10ms 3.85% context.WithValue
10ms 3.85% 65.38% 10ms 3.85% runtime.(*spanSet).pop
10ms 3.85% 69.23% 10ms 3.85% runtime.(*traceBuf).byte (inline)
10ms 3.85% 73.08% 10ms 3.85% runtime.casgstatus
10ms 3.85% 76.92% 10ms 3.85% runtime.heapBitsForAddr (inline)
10ms 3.85% 80.77% 10ms 3.85% runtime.makechan
This shows CPU time and usage for functions in use by Vault.
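Because the -top output is plain text in fixed columns, it is also easy to post-process. The following is an illustrative sketch (not part of hcdiag or pprof) that parses the flat column to rank functions by CPU time, using a trimmed copy of the output above:

```python
# Sketch: parse the flat/cum table printed by `go tool pprof -top`
# and rank functions by their "flat" CPU time. Illustration only.
import re

sample = """\
      flat  flat%   sum%        cum   cum%
      80ms 30.77% 30.77%       80ms 30.77%  runtime.futex
      40ms 15.38% 46.15%       40ms 15.38%  runtime.epollwait
      20ms  7.69% 53.85%       30ms 11.54%  runtime.gentraceback
"""

rows = []
for line in sample.splitlines()[1:]:  # skip the header row
    cols = line.split()
    flat_ms = int(re.sub(r"ms$", "", cols[0]))  # e.g. "80ms" -> 80
    rows.append((cols[5], flat_ms))             # (function, flat ms)

rows.sort(key=lambda r: r[1], reverse=True)
print(rows[0])  # prints ('runtime.futex', 80), the hottest function
```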
To examine memory usage, use the same command against the heap profile instead.
$ go tool pprof -top heap.prof | head -n 15
File: vault
Type: inuse_space
Time: Feb 10, 2022 at 11:41am (EST)
Showing nodes accounting for 40848.57kB, 100% of 40848.57kB total
flat flat% sum% cum cum%
28161.83kB 68.94% 68.94% 28161.83kB 68.94% github.com/hashicorp/vault/enthelpers/merkle.NewMerkleSubPage (inline)
6192.12kB 15.16% 84.10% 34353.95kB 84.10% github.com/hashicorp/vault/enthelpers/merkle.(*Tree).getPages
1322kB 3.24% 87.34% 2346.03kB 5.74% github.com/hashicorp/vault/enthelpers/merkle.(*Tree).recoverPages
1024.03kB 2.51% 89.84% 1024.03kB 2.51% github.com/hashicorp/vault/vendor/google.golang.org/protobuf/internal/impl.consumeBytesNoZero
544.67kB 1.33% 91.18% 544.67kB 1.33% github.com/hashicorp/vault/vendor/google.golang.org/protobuf/internal/strs.(*Builder).AppendFullName
521.05kB 1.28% 92.45% 521.05kB 1.28% github.com/hashicorp/vault/vendor/go.etcd.io/bbolt.(*node).put
519.03kB 1.27% 93.72% 519.03kB 1.27% github.com/hashicorp/vault/vendor/github.com/jackc/pgx/pgtype.NewConnInfo (inline)
514.63kB 1.26% 94.98% 514.63kB 1.26% regexp.makeOnePass.func1
512.88kB 1.26% 96.24% 512.88kB 1.26% github.com/hashicorp/vault/vault.(*SystemBackend).raftAutoSnapshotPaths
512.19kB 1.25% 97.49% 512.19kB 1.25% runtime.malg
You can also generate SVG based call graphs. For example, to generate a graph of goroutines, you would use a command like this.
$ go tool pprof -web goroutine.prof
This generates an SVG graphic and opens it in your system's default handler, which is usually a web browser. You can learn more about interpreting call graphs in the pprof documentation.
TIP: If you are new to pprof, there is an excellent article on pprof that explains it thoroughly.
The second directory, 2022-02-24T14-50-57Z, contains the same information gathered 10 seconds later.
The remaining files which make up the debug archive are described here individually:
- config.json is a JSON representation of the current Vault server configuration.
- host_info.json contains detailed host resource information about CPU, filesystems, memory, and so on.
- index.json contains a summary of the debug capture and the files gathered.
- metrics.json contains Vault telemetry metrics data.
- replication_status.json contains the current Vault server's Enterprise replication status.
- server_status.json contains output from the seal-status API, similar to the output of the vault status CLI command.
- vault.log contains entries from the Vault server operational log for the duration of the debug capture.
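The timestamped directory names in the bundle encode when each capture ran, so you can recover the capture interval directly from them; a small sketch:

```python
# Sketch: derive the capture interval from the debug bundle's
# timestamped directory names (format: YYYY-MM-DDTHH-MM-SSZ).
from datetime import datetime

dirs = ["2022-02-24T14-50-47Z", "2022-02-24T14-50-57Z"]
times = [datetime.strptime(d, "%Y-%m-%dT%H-%M-%SZ") for d in dirs]

interval = (times[1] - times[0]).total_seconds()
print(interval)  # prints 10.0, matching the 10s interval used by vault debug
```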
»Configuration file
You can also configure hcdiag behavior with a HashiCorp Configuration Language (HCL) formatted file. If you examine the Results.json file under the relevant product key, such as vault, you can find the commands that are executed.
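A short script along these lines can pull those commands out of the product key; the miniature dictionary below stands in for a parsed Results.json and mirrors its product -> command -> runner structure:

```python
# Sketch: list the commands hcdiag recorded for a product by walking
# the Results.json structure (product -> command -> runner metadata).
import json

# Miniature stand-in for json.load(open("Results.json"))
results = json.loads("""
{
  "vault": {
    "vault status -format=json": {
      "runner": {"command": "vault status -format=json"},
      "error": ""
    },
    "vault version": {
      "runner": {"command": "vault version"},
      "error": ""
    }
  }
}
""")

commands = sorted(entry["runner"]["command"]
                  for entry in results["vault"].values())
print(commands)  # prints ['vault status -format=json', 'vault version']
```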
Here is a minimal example configuration file that instructs hcdiag to exclude the vault debug command.
product "vault" {
excludes = ["vault debug"]
}
If you create this file as diag.hcl and execute hcdiag as follows, you can expect output resembling this example.
$ hcdiag -config diag.hcl
2021-08-10T22:02:42.310Z [INFO] hcdiag: Checking product availability
2021-08-10T22:02:42.310Z [INFO] hcdiag: Gathering diagnostics
2021-08-10T22:02:42.310Z [INFO] hcdiag: Running seekers for: product=host
2021-08-10T22:02:42.311Z [INFO] hcdiag: running: seeker=stats
2021-08-10T22:02:42.320Z [INFO] hcdiag: Created Results.json file: dest=temp417162427/Results.json
2021-08-10T22:02:42.321Z [INFO] hcdiag: Created Manifest.json file: dest=temp417162427/Manifest.json
2021-08-10T22:02:42.325Z [INFO] hcdiag: Compressed and archived output file: dest=.
You do not have to create the configuration and execute the command again yourself, but if you compare this example output to your previous invocation, notice that the excluded vault debug information is not present.
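Conceptually, excludes acts as a filter over the commands hcdiag would otherwise run. The following sketch shows the idea as a prefix match over planned commands; it is an illustration of the concept only, not hcdiag's actual matching logic:

```python
# Sketch of exclusion as a prefix filter over planned commands.
# This illustrates the idea; it is not hcdiag's implementation.
excludes = ["vault debug"]

planned = [
    "vault debug -output=VaultDebug.tar.gz -duration=10s",
    "vault status -format=json",
    "vault version",
]

to_run = [cmd for cmd in planned
          if not any(cmd.startswith(ex) for ex in excludes)]
print(to_run)  # the vault debug command is filtered out
```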
»Production usage tips
By default, the hcdiag tool includes files from up to 72 hours before the current time. You can specify the desired time range with the -include-since flag.
If you are concerned about impacting the performance of your Vault servers, you can use the -serial flag to invoke the seekers serially instead of concurrently.
Deploying hcdiag in production involves a workflow similar to the following:
- Place the hcdiag binary on a system that can connect to the Vault server targeted by hcdiag, such as a bastion host or the Vault host itself.
- When running with a configuration file and the -config flag, ensure that the specified configuration file is readable by the user that executes hcdiag.
- Ensure that the current directory, or the directory specified by the -dest flag, is writable by the user that executes hcdiag.
- Ensure connectivity to the HashiCorp products that hcdiag needs to connect to during the run. Export any environment variables required for establishing connections or passing authentication tokens.
- Decide on a duration for information gathering, noting that the default gathers up to 72 hours of server log output. Adjust as necessary with the -include-since flag. For example, to include only 24 hours of log output, invoke: hcdiag -vault -include-since 24h
- Limit what is gathered with the -includes flag. For example, -includes /var/log/consul-*,/var/log/nomad-* instructs hcdiag to gather only logs matching the specified Consul and Nomad filename patterns.
- Use the -dryrun flag to observe what hcdiag would do without actually doing anything, which is useful for testing configuration and options.
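Conceptually, -include-since sets a cutoff time and drops anything older. A sketch of that idea, using a hypothetical list of log files and modification times (again, an illustration rather than hcdiag's implementation):

```python
# Sketch: treat -include-since as a modification-time cutoff.
# Hypothetical file list; not hcdiag's actual implementation.
from datetime import datetime, timedelta

now = datetime(2022, 2, 24, 15, 0, 0)
include_since = timedelta(hours=24)  # equivalent of "-include-since 24h"
cutoff = now - include_since

log_files = [
    ("vault-2022-02-24.log", datetime(2022, 2, 24, 14, 0, 0)),
    ("vault-2022-02-20.log", datetime(2022, 2, 20, 9, 0, 0)),
]

kept = [name for name, mtime in log_files if mtime >= cutoff]
print(kept)  # only the log modified within the last 24 hours survives
```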
»Scenario cleanup
To clean up from exploring hcdiag, first exit the container.
$ exit
Then, stop the container.
$ docker stop dev-vault
»Summary
You learned about the hcdiag tool in the context of using it to gather information from a running Vault server environment.
You also learned about the available configuration flags, the configuration file, and production specific tips for using hcdiag.