Stateful Workloads with Portworx

Portworx integrates with Nomad and can manage storage for stateful workloads running inside your Nomad cluster. In this guide, you will install and configure Portworx on each Nomad client node to create a storage pool that tasks can use for storage and replication. You will then deploy an HA MySQL database using that storage with a replication factor of 3, ensuring the data will be replicated on 3 different client nodes.

Prerequisites

To perform the tasks described in this guide, you need to have a Nomad environment with Consul installed. You can use this Terraform environment to provision a sandbox environment. This guide will assume a cluster with one server node and three client nodes.
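
Before moving on to the Portworx setup, you can optionally confirm that all three client nodes have registered with the server and are ready. Run the following from any machine that can reach your Nomad server:

$ nomad node status

You should see your three client nodes listed with a status of ready.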

Verify your storage is adequate

  • Portworx needs an unformatted and unmounted block device that it can fully manage. If you have provisioned a Nomad cluster in AWS using the environment provided in this guide, you already have an external block device ready to use (/dev/xvdd) with a capacity of 50 GB. A quick way to confirm the device is in the expected state is shown after this list.

  • Ensure your root volume's size is at least 20 GB. If you are using the environment provided in this guide, add the following line to your terraform.tfvars file:

    root_block_device_size = 20
    
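If you want to confirm that the block device is unmounted and has no filesystem before handing it to Portworx, a quick check (assuming the device is /dev/xvdd, as in the environment above) is:

$ lsblk -f /dev/xvdd

The FSTYPE and MOUNTPOINT columns should be empty for the device.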

Install the MySQL client

You will use the MySQL client to connect to your MySQL database and verify your data. Ensure it is installed on a node with access to port 3306 on your Nomad clients:

Ubuntu:

$ sudo apt install mysql-client

CentOS:

$ sudo yum install mysql

macOS via Homebrew:

$ brew install mysql-client
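
Note that on macOS, Homebrew may install mysql-client as a keg-only formula, meaning the mysql binary is not linked onto your PATH automatically. If the mysql command is not found after installing, you may need to add it, for example:

$ export PATH="$(brew --prefix)/opt/mysql-client/bin:$PATH"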

Install Portworx

Set up the PX-OCI bundle

Run the following command on each client node to set up the PX-OCI bundle:

$ sudo docker run --entrypoint /runc-entry-point.sh \
    --rm -i --privileged=true \
    -v /opt/pwx:/opt/pwx -v /etc/pwx:/etc/pwx \
    portworx/px-enterprise:2.0.2.3

If the command is successful, you will see output similar to what is shown below (abbreviated):

Unable to find image 'portworx/px-enterprise:2.0.2.3' locally
2.0.2.3: Pulling from portworx/px-enterprise
...
Status: Downloaded newer image for portworx/px-enterprise:2.0.2.3
Executing with arguments:
INFO: Copying binaries...
INFO: Copying rootfs...
...
INFO: Done copying OCI content.
You can now run the Portworx OCI bundle by executing one of the following:

    # sudo /opt/pwx/bin/px-runc run [options]
    # sudo /opt/pwx/bin/px-runc install [options]
...

Configure Portworx OCI bundle

Configure the Portworx OCI bundle on each client node by running the following command (the values provided to the options will be different for your environment):

$ sudo /opt/pwx/bin/px-runc install -k consul://172.31.49.111:8500 \
    -c my_test_cluster -s /dev/xvdd
  • You can use the IP address of the client node you are on with the -k option since Consul is installed alongside Nomad. A quick way to look up this address is shown after this list.

  • Be sure to provide the -s option with your external block device path.
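
If you are not sure which address to use with the -k option, you can ask the local Consul agent, since Consul runs on every node in this environment (this assumes the consul binary is on your PATH):

$ consul members

The Address column shows the private IP of each node; use the address of the node you are on in the consul:// URL.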

If the configuration is successful, you will see the following output (abbreviated):

INFO[0000] Rootfs found at /opt/pwx/oci/rootfs
INFO[0000] PX binaries found at /opt/pwx/bin/px-runc
INFO[0000] Initializing as version 2.0.2.3-c186a87 (OCI)
...
INFO[0000] Successfully written /etc/systemd/system/portworx.socket
INFO[0000] Successfully written /etc/systemd/system/portworx-output.service
INFO[0000] Successfully written /etc/systemd/system/portworx.service

Since you have created new unit files, please run the following command to reload the systemd manager configuration:

$ sudo systemctl daemon-reload

Start Portworx and check status

Run the following command to start Portworx:

$ sudo systemctl start portworx

Verify the service:

$ sudo systemctl status portworx
● portworx.service - Portworx OCI Container
   Loaded: loaded (/etc/systemd/system/portworx.service; disabled; vendor preset
   Active: active (running) since Wed 2019-03-06 15:16:51 UTC; 1h 47min ago
     Docs: https://docs.portworx.com/runc
  Process: 28230 ExecStartPre=/bin/sh -c /opt/pwx/bin/runc delete -f portworx ||
 Main PID: 28238 (runc)
...
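
Note that the Loaded line above reports the unit as disabled, so Portworx will not start automatically after a reboot. This is not required for the rest of this guide, but you can optionally enable the unit:

$ sudo systemctl enable portworx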

Wait a few moments (Portworx may still be initializing) and then check the status of Portworx using the pxctl command.

$ pxctl status

If everything is working properly, you should see output similar to the following:

Status: PX is operational
License: Trial (expires in 31 days)
Node ID: 07113eef-0533-4de8-b1cf-4471c18a7cda
  IP: 172.31.53.231
   Local Storage Pool: 1 pool
  POOL  IO_PRIORITY  RAID_LEVEL  USABLE  USED     STATUS  ZONE        REGION
  0     LOW          raid0       50 GiB  4.4 GiB  Online  us-east-1c  us-east-1
  Local Storage Devices: 1 device

Once all nodes are configured, you should see a cluster summary with the total capacity of the storage pool (if you're using the environment provided in this guide, the total capacity will be 150 GB since the external block device attached to each client node has a capacity of 50 GB):

Cluster Summary
  Cluster ID: my_test_cluster
  Cluster UUID: 705a1cbd-4d58-4a0e-a970-1e6b28375590
  Scheduler: none
  Nodes: 3 node(s) with storage (3 online)
...
Global Storage Pool
  Total Used      :  13 GiB
  Total Capacity  :  150 GiB

Create a Portworx volume

Run the following command to create a Portworx volume that your job will be able to use:

$ pxctl volume create -s 10 -r 3 mysql

You should see output similar to what is shown below:

Volume successfully created: 693373920899724151
  • Please note from the options provided that the name of the volume you created is mysql and the size is 10 GB.

  • You have configured a replication factor of 3, which ensures your data is available on all 3 client nodes. An optional command to list all volumes is shown after this list.
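
To list all Portworx volumes in the cluster and confirm the new volume appears with the expected size and HA level, you can run the following (the exact columns in the output may vary by Portworx version):

$ pxctl volume list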

Run pxctl volume inspect mysql to verify the status of the volume:

$ pxctl volume inspect mysql
Volume  :  693373920899724151
  Name               :  mysql
  Size               :  10 GiB
  Format             :  ext4
  HA                 :  3
...
  Replica sets on nodes:
    Set 0
      Node      : 172.31.58.210 (Pool 0)
      Node      : 172.31.51.110 (Pool 0)
      Node      : 172.31.48.98 (Pool 0)
  Replication Status   :  Up

Deploy MySQL

Create the job file

You are now ready to deploy a MySQL database that can use Portworx for storage. Create a file called mysql.nomad and provide it the following contents:

job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      env = {
        "MYSQL_ROOT_PASSWORD" = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest"

        port_map {
          db = 3306
        }

        volumes = [
          "mysql:/var/lib/mysql",
        ]

        volume_driver = "pxd"
      }

      resources {
        cpu    = 500
        memory = 1024

        network {
          port "db" {
            static = 3306
          }
        }
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
  • Please note from the job file that you are using the pxd volume driver that you configured in the previous steps.

  • The service name is mysql-server, which you will use later to connect to the database.

Run the job

Register the job file you created in the previous step with the following command:

$ nomad run mysql.nomad
==> Monitoring evaluation "aa478d82"
    Evaluation triggered by job "mysql-server"
    Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"

Check the status of the allocation and ensure the task is running:

$ nomad status mysql-server
ID            = mysql-server
...
Summary
Task Group    Queued  Starting  Running  Failed  Complete  Lost
mysql-server  0       0         1        0       0         0
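
If you want more detail than the job summary provides, you can drill into the allocation that was created for the job. For example, using the allocation ID from the nomad run output above (yours will differ):

$ nomad alloc status 6c3b3703

This shows the task events, resource usage, and the client node the allocation was placed on.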

Write data to MySQL

Connect to MySQL

Using the mysql client (installed earlier), connect to the database and access the information:

$ mysql -h mysql-server.service.consul -u web -p -D itemcollection

The password for this demo database is password.

Consul is installed alongside Nomad in this cluster, so you are able to connect using the mysql-server service name you registered with your task in your job file.
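
If the hostname does not resolve, you can query Consul's DNS interface for the service record directly. This assumes the local Consul agent is serving DNS on its default port, 8600:

$ dig @127.0.0.1 -p 8600 mysql-server.service.consul

The answer section should contain the IP address of the client node running the MySQL allocation.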

Add test data

Once you are connected to the database, verify that the items table exists:

mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items                    |
+--------------------------+
1 row in set (0.00 sec)

Display the contents of this table with the following command:

mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
+----+----------+
3 rows in set (0.00 sec)

Now add some data to this table (after you terminate your database in Nomad and bring it back up, this data should still be intact):

mysql> INSERT INTO items (name) VALUES ('glove');

Run the INSERT INTO command as many times as you like with different values.

mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
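
Before tearing the job down, you can optionally confirm the new rows were written. Assuming you inserted the example values above, you should see your new rows (glove, hat, keyboard) alongside the original three:

mysql> select * from items;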

Once you are done, type exit to return to the Nomad client command line:

mysql> exit
Bye

Destroy the database job

Run the following command to stop and purge the MySQL job from the cluster:

$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
    Evaluation triggered by job "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"

Verify no jobs are running in the cluster:

$ nomad status
No running jobs

You can optionally stop the nomad service on whichever node you are on and move to another node to simulate a node failure.
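
A minimal sketch of that simulation, assuming Nomad was installed as a systemd service (as it is in the environment provided in this guide), is to stop the agent on the current node:

$ sudo systemctl stop nomad

Then, from another client or the server node, check cluster membership:

$ nomad node status

The stopped node should eventually report a status of down, while the remaining nodes stay ready.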

Re-deploy and verify

Using the mysql.nomad job file from earlier, re-deploy the database to the Nomad cluster.

==> Monitoring evaluation "61b4f648"
    Evaluation triggered by job "mysql-server"
    Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"

Once you re-connect to MySQL, you should be able to see that the information you added prior to destroying the database is still present:

mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
|  4 | glove    |
|  5 | hat      |
|  6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)

Summary

In this guide, you deployed a highly-available MySQL server using Portworx. Portworx also has a guide—Portworx on Nomad—that discusses more ways to integrate Portworx with Nomad.