Stateful Workloads with Container Storage Interface

Nomad's Container Storage Interface (CSI) integration can manage external storage volumes for stateful workloads running inside your cluster. CSI providers are third-party plugins that run as Nomad jobs and can mount volumes created by your cloud provider. Nomad is aware of CSI-managed volumes during the scheduling process, enabling it to schedule your workloads based on the availability of volumes on a specific client.

Each storage provider builds its own CSI plugin, and you can use any of them in Nomad. You can launch jobs that claim storage from AWS Elastic Block Store (EBS) or Elastic File System (EFS) volumes, GCP persistent disks, DigitalOcean block storage volumes, or vendor-agnostic providers like Portworx. This also means that the plugins storage providers write to support Kubernetes work with Nomad as well. You can find a list of plugins in the Kubernetes CSI Developer Documentation.

Unlike Nomad's host_volume feature, CSI-managed volumes can be added and removed from a Nomad cluster without changing the Nomad client configuration.
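For comparison, a host volume has to be declared in each client's agent configuration before it can be used, along the lines of this sketch (the volume name and path here are illustrative only):

client {
  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}

A CSI volume, by contrast, is registered with the cluster at runtime and requires no per-client configuration.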

Using Nomad's CSI integration consists of three core workflows: running CSI plugins, registering volumes for those plugins, and running jobs that claim those volumes. In this guide, you'll run the AWS Elastic Block Store (EBS) plugin, register an EBS volume for that plugin, and deploy a MySQL workload that claims that volume for persistent storage.
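At a high level, those three workflows map onto the commands you will run over the course of this guide (the file names refer to files you'll create in later steps):

# 1. Run the CSI plugin jobs (controller and nodes)
$ nomad job run plugin-ebs-controller.nomad
$ nomad job run plugin-ebs-nodes.nomad

# 2. Register a volume for the plugin to manage
$ nomad volume register volume.hcl

# 3. Run a job that claims the volume
$ nomad job run mysql.nomad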

»Prerequisites

To perform the tasks described in this guide, you need:

  • a Nomad cluster on AWS with Consul installed. You can use this Terraform environment to provision a sandbox cluster. This guide assumes a cluster with one server node and two client nodes.

  • Nomad v0.11.0 or greater (you can confirm the installed versions with the commands below)
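If you are unsure which versions your cluster is running, the version subcommands will tell you:

$ nomad version
$ consul version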

»Install the MySQL client

You will use the MySQL client to connect to the MySQL database and verify your data. Ensure it is installed on a node with access to port 3306 on your Nomad clients:

Ubuntu:

$ sudo apt install mysql-client

CentOS:

$ sudo yum install mysql

macOS via Homebrew:

$ brew install mysql-client
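Whichever platform you use, confirm the client is available on your PATH before moving on:

$ mysql --version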

»Deploy an AWS EBS volume

Next, use the same Terraform stack you used to create the Nomad cluster to provision an AWS EBS volume that the CSI plugin will mount wherever your jobs need it.

Add the following new resources to your Terraform stack.

resource "aws_iam_role_policy" "mount_ebs_volumes" {
  name   = "mount-ebs-volumes"
  role   = aws_iam_role.instance_role.id
  policy = data.aws_iam_policy_document.mount_ebs_volumes.json
}

data "aws_iam_policy_document" "mount_ebs_volumes" {
  statement {
    effect = "Allow"

    actions = [
      "ec2:DescribeVolume*",
      "ec2:AttachVolume",
      "ec2:DetachVolume",
    ]
    resources = ["*"]
  }
}

resource "aws_ebs_volume" "mysql" {
  availability_zone = aws_instance.client[0].availability_zone
  size              = 40
}

output "ebs_volume" {
  value = <<EOM
# volume registration
type = "csi"
id = "mysql"
name = "mysql"
external_id = "${aws_ebs_volume.mysql.id}"
access_mode = "single-node-writer"
attachment_mode = "file-system"
plugin_id = "aws-ebs0"
EOM
}

Run terraform plan and terraform apply to create the new IAM policy and EBS volume. Then run terraform output ebs_volume > volume.hcl. You'll use this file later to register the volume with Nomad.
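Run from the directory containing your Terraform stack, the full sequence looks like this:

$ terraform plan
$ terraform apply
$ terraform output ebs_volume > volume.hcl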

»Notes about the above Terraform configuration

  • The IAM policy document and role policy are added to the existing instance role for your EC2 instances. This policy gives the EC2 instances the ability to mount the volume you've created in Terraform, but not the ability to create new volumes.

  • The EBS volume resource is the data volume you will attach via CSI later. The output will be used to register the volume with Nomad.

»Deploy the EBS plugin

CSI plugins run as Nomad jobs that include a csi_plugin stanza. The official plugin for AWS EBS can be found on GitHub in the aws-ebs-csi-driver repository. It's packaged as a Docker container that you can run with the Docker task driver.

Each CSI plugin supports one or more types: Controllers and Nodes. Node instances of a plugin must run on every Nomad client node where you want to mount volumes, so you'll typically run Node plugin instances as Nomad system jobs. Some plugins also require coordinating Controller instances, which can run on any Nomad client node.

The AWS EBS plugin requires a controller plugin to coordinate access to the EBS volume and node plugins to mount the volume to the EC2 instances. You'll create the controller job as a Nomad service job and the node job as a Nomad system job.

Create a file for the controller job called plugin-ebs-controller.nomad with the following content.

job "plugin-aws-ebs-controller" {
  datacenters = ["dc1"]

  group "controller" {
    task "plugin" {
      driver = "docker"

      config {
        image = "amazon/aws-ebs-csi-driver:latest"

        args = [
          "controller",
          "--endpoint=unix://csi/csi.sock",
          "--logtostderr",
          "--v=5",
        ]
      }

      csi_plugin {
        id        = "aws-ebs0"
        type      = "controller"
        mount_dir = "/csi"
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

Create a file for the node job named plugin-ebs-nodes.nomad with the following content.

job "plugin-aws-ebs-nodes" {
  datacenters = ["dc1"]

  # you can run node plugins as service jobs as well, but this ensures
  # that all nodes in the DC have a copy.
  type = "system"

  group "nodes" {
    task "plugin" {
      driver = "docker"

      config {
        image = "amazon/aws-ebs-csi-driver:latest"

        args = [
          "node",
          "--endpoint=unix://csi/csi.sock",
          "--logtostderr",
          "--v=5",
        ]

        # node plugins must run as privileged jobs because they
        # mount disks to the host
        privileged = true
      }

      csi_plugin {
        id        = "aws-ebs0"
        type      = "node"
        mount_dir = "/csi"
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

Deploy both jobs with nomad job run plugin-ebs-controller.nomad and nomad job run plugin-ebs-nodes.nomad. It will take a few moments for the plugins to register themselves as healthy with Nomad after the jobs are running. You can check the plugin status with the nomad plugin status command.

Note that the plugin does not have a namespace, even though the jobs that launched it do. Plugins are treated as resources available to the whole cluster in the same way as Nomad clients.

$ nomad job status
ID                         Type     Priority  Status   Submit Date
plugin-aws-ebs-controller  service  50        running  2020-03-20T10:49:13-04:00
plugin-aws-ebs-nodes       system   50        running  2020-03-20T10:49:17-04:00
$ nomad plugin status aws-ebs0
ID                   = aws-ebs0
Provider             = ebs.csi.aws.com
Version              = v0.6.0-dirty
Controllers Healthy  = 1
Controllers Expected = 1
Nodes Healthy        = 2
Nodes Expected       = 2

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
de2929cc  ac41c184  controller  0        run      running  1m26s ago  1m8s ago
d1d4831e  ac41c184  nodes       0        run      running  1m22s ago  1m18s ago
2b815e02  b896731a  nodes       0        run      running  1m22s ago  1m14s ago

»Register the volume

The CSI plugins need to be told about each volume they manage, so for each volume you'll run nomad volume register. Earlier you used Terraform to output a volume.hcl file with the volume definition.

$ nomad volume register volume.hcl
$ nomad volume status mysql
ID                   = mysql
Name                 = mysql
External ID          = vol-0b756b75620d63af5
Plugin ID            = aws-ebs0
Provider             = ebs.csi.aws.com
Version              = v0.6.0-dirty
Schedulable          = true
Controllers Healthy  = 1
Controllers Expected = 1
Nodes Healthy        = 2
Nodes Expected       = 2
Access Mode          = single-node-writer
Attachment Mode      = file-system
Namespace            = default

Allocations
No allocations placed

The volume status output above indicates that the volume is ready to be scheduled, but has no allocations currently using it.

»Deploy MySQL

»Create the job file

You are now ready to deploy a MySQL database that uses the CSI volume you registered for storage. Create a file called mysql.nomad with the following contents.

job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    volume "mysql" {
      type      = "csi"
      read_only = false
      source    = "mysql"
    }

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      volume_mount {
        volume      = "mysql"
        destination = "/srv"
        read_only   = false
      }

      env = {
        "MYSQL_ROOT_PASSWORD" = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest"
        args = ["--datadir", "/srv/mysql"]

        port_map {
          db = 3306
        }
      }

      resources {
        cpu    = 500
        memory = 1024

        network {
          port "db" {
            static = 3306
          }
        }
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

»Notes about the above job specification

  • The service name is mysql-server, which you will use later to connect to the database.

  • The read_only argument is supplied on all of the volume-related stanzas to help highlight all of the places you would need to change to make a read-only volume mount. Consult the volume and volume_mount specifications for more details.

  • For lower-memory instances, you might need to reduce the requested memory in the resources stanza so the job fits the resources available in your cluster, as in the sketch below.
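A reduced resources stanza might look like the following; 512 MB is an arbitrary example value, so size it to your client instances:

resources {
  cpu    = 500
  memory = 512
}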

»Run the job

Register the job file you created in the previous step with the following command.

$ nomad run mysql.nomad
==> Monitoring evaluation "aa478d82"
    Evaluation triggered by job "mysql-server"
    Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"

The allocation status will have a section for the CSI volume, and the volume status will show the allocation claiming the volume.

$ nomad alloc status 6c3b3703
CSI Volumes:
ID     Read Only
mysql  false
$ nomad volume status mysql
ID                   = mysql
Name                 = mysql
External ID          = vol-0b756b75620d63af5
Plugin ID            = aws-ebs0
Provider             = ebs.csi.aws.com
Version              = v0.6.0-dirty
Schedulable          = true
Controllers Healthy  = 1
Controllers Expected = 1
Nodes Healthy        = 2
Nodes Expected       = 2
Access Mode          = single-node-writer
Attachment Mode      = file-system
Namespace            = default

Allocations
ID        Node ID   Task Group    Version  Desired  Status   Created    Modified
6c3b3703  ac41c184  mysql-server  3        run      running  1m40s ago  1m2s ago

»Write data to MySQL

»Connect to MySQL

Using the mysql client (installed earlier), connect to the database and access the information.

$ mysql -h mysql-server.service.consul -u web -p -D itemcollection

The password for this demo database is password.

Consul is installed alongside Nomad in this cluster, so you can connect using the mysql-server service name you registered with the task in your job file.
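If the service name does not resolve from the node you are on, you can query Consul's DNS interface directly to confirm the service is registered (this assumes Consul's default DNS port, 8600):

$ dig @127.0.0.1 -p 8600 mysql-server.service.consul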

»Add test data

Once you are connected to the database, verify that the items table exists.

mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items                    |
+--------------------------+
1 row in set (0.00 sec)

Display the contents of this table with the following command.

mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
+----+----------+
3 rows in set (0.00 sec)

Now add some data to this table. After you terminate the database job in Nomad and bring it back up, this data should still be intact.

mysql> INSERT INTO items (name) VALUES ('glove');

Run the INSERT INTO command as many times as you like with different values.

mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');

Once you are done, type exit to return to the Nomad client command line.

mysql> exit
Bye

»Destroy the database job

Run the following command to stop and purge the MySQL job from the cluster.

$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
    Evaluation triggered by job "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"

Verify mysql is no longer running in the cluster.

$ nomad job status mysql
No job(s) with prefix or id "mysql" found

»Re-deploy and verify

Using the mysql.nomad job file from earlier, re-deploy the database to the Nomad cluster.

$ nomad run mysql.nomad
==> Monitoring evaluation "61b4f648"
    Evaluation triggered by job "mysql-server"
    Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"

Re-connect to MySQL to verify that the information you added prior to destroying the database is still present.
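Use the same connection command as before:

$ mysql -h mysql-server.service.consul -u web -p -D itemcollection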

mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
|  4 | glove    |
|  5 | hat      |
|  6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)

»Cleanup

Once you have completed this guide, you should perform the following cleanup steps; the corresponding commands are sketched after the list.

  • Stop and purge the mysql-server job.

  • Unregister the EBS volume from Nomad with nomad volume deregister mysql.

  • Stop and purge the plugin-aws-ebs-controller and plugin-aws-ebs-nodes jobs.

  • Destroy the EBS volume with terraform destroy.
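Assuming the job, volume, and plugin names used throughout this guide, those steps correspond to the following commands:

$ nomad stop -purge mysql-server
$ nomad volume deregister mysql
$ nomad stop -purge plugin-aws-ebs-controller
$ nomad stop -purge plugin-aws-ebs-nodes
$ terraform destroy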

»Summary

In this guide, you deployed a CSI plugin to Nomad, registered an AWS EBS volume for that plugin, and ran a job that mounted the volume into a Dockerized MySQL container whose data persisted beyond the job's lifecycle.