Red Hat OpenShift is a distribution of the Kubernetes platform that provides a number of usability and security enhancements.
In this tutorial you will:
- Deploy an OpenShift cluster
- Deploy a Consul datacenter
- Access the Consul UI
- Use the Consul CLI to inspect your environment
- Decommission the OpenShift environment
Security Warning: This tutorial is not for production use. The chart is installed with an insecure configuration of Consul. Refer to the Secure Consul and Registered Services on Kubernetes tutorial to learn how you can secure Consul on Kubernetes in production.
To complete this tutorial you will need:
- Access to a Kubernetes cluster deployed with OpenShift
- A text editor
- Basic command line access
- A Red Hat account
- The Consul CLI
- CodeReady Containers
This tutorial was tested with:
- CodeReady Containers 1.17.0+99f5c87
- Helm v3.2.1
- consul-helm 0.25.0
- Consul 1.8.4
»Download Helm chart
If you have not already done so, download the latest official consul-helm chart now.
```shell-session
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
```
»Verify chart version
To ensure you have version 0.25.0 of the Helm chart, search your local repo.
```shell-session
$ helm search repo hashicorp/consul
NAME              CHART VERSION  APP VERSION  DESCRIPTION
hashicorp/consul  0.25.0         1.8.4        Official HashiCorp Consul Chart
```
If the correct version is not displayed in the output, try updating your helm repo.
```shell-session
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hashicorp" chart repository
```
CodeReady Containers (CRC) can be used to deploy a preconfigured OpenShift 4.1 or newer cluster to your local computer for development and testing. CRC is bundled as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10. CRC is the quickest way to get started building OpenShift clusters. It is designed to run on a local computer to simplify setup and emulate the cloud development environment locally with all the tools needed to develop container-based apps. While we use CRC in this tutorial, the Consul Helm deployment process will work on any OpenShift cluster and is production ready.
After installing CodeReady Containers, issue the following command to set up your environment.
```shell-session
$ crc setup
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if HyperKit is installed
INFO Checking if crc-driver-hyperkit is installed
INFO Checking file permissions for /etc/hosts
INFO Checking file permissions for /etc/resolver/testing
Setup is complete, you can now run 'crc start' to start the OpenShift cluster
```
Once the setup is complete, you can start the CRC service with the following command. The command will perform a few system checks to ensure your system meets the minimum requirements and will then ask you to provide an image pull secret. You should have your Red Hat account open so that you can easily copy your image pull secret when prompted.
```shell-session
$ crc start
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if goodhosts binary is cached
INFO Checking minimum RAM requirements
INFO Checking if running as non-root
INFO Checking if HyperKit is installed
INFO Checking if crc-driver-hyperkit is installed
INFO Checking file permissions for /etc/hosts
INFO Checking file permissions for /etc/resolver/testing
? Image pull secret [? for help]
```
Next, paste the image pull secret into the terminal and press enter.
```shell-session
INFO Loading bundle: crc_hyperkit_4.5.14.crcbundle ...
INFO Checking size of the disk image /Users/derekstrickland/.crc/cache/crc_hyperkit_4.5.14/crc.qcow2 ...
...TRUNCATED...
To access the cluster, first set up your environment by following 'crc oc-env' instructions.
Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443'.
You can now run 'crc console' and use these credentials to access the OpenShift web console.
```
Notice that the output instructs you to configure your oc-env, and also includes a login command and secret password. The secret is specific to your installation. Make note of this command, as you will use it to log in to CRC on your development machine.
»Configure CRC environment
Next, configure the environment as instructed by CRC using the following command.
```shell-session
$ eval $(crc oc-env)
```
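To see what this step actually does: `crc oc-env` prints shell `export` statements, and `eval` runs that output in your current shell so the CRC-managed `oc` binary lands on your `PATH`. The sketch below substitutes a hypothetical stub function for the real `crc` binary purely to illustrate the mechanism.

```shell
# Stub standing in for the real 'crc oc-env', which prints export
# statements such as: export PATH="/Users/you/.crc/bin/oc:$PATH"
crc_oc_env_stub() {
  printf 'export PATH="%s/.crc/bin/oc:$PATH"\n' "$HOME"
}

# 'eval' executes the printed export in the *current* shell, which is
# why a plain 'crc oc-env' without eval would have no effect.
eval "$(crc_oc_env_stub)"

# The CRC-managed 'oc' directory is now on PATH.
case ":$PATH:" in
  *".crc/bin/oc:"*) echo "oc is on PATH" ;;
  *)                echo "oc is NOT on PATH" ;;
esac
```

With the real binary, `eval $(crc oc-env)` behaves the same way, which is why it must be re-run in each new terminal session.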
»Login to the CRC cluster
Next, use the login command you made note of before to authenticate with the CRC cluster.
Note: You will have to replace the secret password below with the value output by CRC.
```shell-session
$ oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443
Login successful.

You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
```
»Create a new project
Now, create a new project for this tutorial.
```shell-session
$ oc new-project consul
Now using project "consul" on server "https://api.crc.testing:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app ruby~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
```
Validate that your CRC setup was successful with the following command.
```shell-session
$ kubectl cluster-info
Kubernetes master is running at https://api.crc.testing:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
You can now deploy a complete Consul datacenter using the official Helm chart. Review the Helm chart docs to learn more about available settings.
»Helm chart configuration
To customize your deployment, you can pass a YAML configuration file to be used during the deployment.
Any values specified in the values file will override the Helm chart's default settings.
The following example file sets the global.openshift.enabled entry to true, which is required to operate Consul on OpenShift. Use this command to generate a config.yaml that you will reference in the helm install command later.
```shell-session
$ cat > config.yaml << EOF
global:
  name: consul
  datacenter: dc1
  openshift:
    enabled: true
server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
client:
  enabled: true
  grpc: true
ui:
  enabled: true
connectInject:
  enabled: true
  default: true
controller:
  enabled: true
EOF
```
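Before installing, it can be worth a quick sanity check that the generated file actually contains the OpenShift toggle, since a mis-indented key would silently fall back to the chart default. This is an optional sketch, not part of the official tutorial; it regenerates the same `config.yaml` so the check is self-contained.

```shell
# Reproduce the config.yaml from the step above so this check can run
# on its own; in practice you would skip this and check the existing file.
cat > config.yaml << 'EOF'
global:
  name: consul
  datacenter: dc1
  openshift:
    enabled: true
server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
client:
  enabled: true
  grpc: true
ui:
  enabled: true
connectInject:
  enabled: true
  default: true
controller:
  enabled: true
EOF

# Print the 'openshift:' line plus the line after it (-A1); grep exits
# non-zero if the key is missing, which makes this usable in scripts.
grep -A1 'openshift:' config.yaml
```

If the check prints `openshift:` followed by `enabled: true`, the OpenShift-specific setting is in place for the `helm install` step below.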
»Install Consul with Helm
Now, issue the helm install command. The following command will:
- Use the custom values file you created in the last step
- Use the hashicorp/consul chart you downloaded earlier
- Set your Consul installation name to consul
- Use consul-helm chart version 0.25.0
```shell-session
$ helm install -f config.yaml consul hashicorp/consul --version "0.25.0" --wait
```
The output will be similar to the following.

```shell-session
NAME: hashicorp
...
$ helm status hashicorp
$ helm get all hashicorp
```
Use kubectl get pods to verify your installation.
```shell-session
$ watch kubectl get pods
NAME                                                        READY  STATUS   RESTARTS  AGE
consul-c74zv                                                1/1    Running  0         10m
consul-connect-injector-webhook-deployment-5f7d4cd45-vxmsl  1/1    Running  0         10m
consul-controller-7c884544f8-nj9lt                          1/1    Running  0         10m
consul-server-0                                             1/1    Running  0         10m
consul-webhook-cert-manager-7fcf99885-rqmm6                 1/1    Running  0         10m
```
Once all pods have a status of Running, enter CTRL-C to stop the watch.
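If you prefer a scriptable readiness check over an interactive watch, you can count pods whose status is not yet `Running`. The sketch below is illustrative only: it runs the filter against a captured copy of the sample output above (saved to a hypothetical `pods.txt`) rather than a live cluster.

```shell
# Captured sample of 'kubectl get pods --no-headers' output, standing in
# for a live cluster in this sketch.
cat > pods.txt << 'EOF'
consul-c74zv                                                1/1  Running  0  10m
consul-connect-injector-webhook-deployment-5f7d4cd45-vxmsl  1/1  Running  0  10m
consul-controller-7c884544f8-nj9lt                          1/1  Running  0  10m
consul-server-0                                             1/1  Running  0  10m
consul-webhook-cert-manager-7fcf99885-rqmm6                 1/1  Running  0  10m
EOF

# Column 3 is STATUS; count rows that are not 'Running'. Against a real
# cluster you would pipe 'kubectl get pods --no-headers' into the same awk.
not_running=$(awk '$3 != "Running" {n++} END {print n+0}' pods.txt)
echo "pods not running: $not_running"
```

A result of `0` corresponds to the "all pods Running" condition described above, so the check works well in a retry loop or CI step.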
»Accessing the Consul UI
Now that Consul has been deployed, you can access the Consul UI to verify that the Consul installation was successful, and that the environment is healthy.
»Expose the UI service to the host
Since the application is running on your local development host, you can expose the Consul UI to the development host using kubectl port-forward. The UI and the HTTP API server run on the consul-server-0 pod. Issue the following command to expose the server endpoint at port 8500 to your local development host.
```shell-session
$ kubectl port-forward consul-server-0 8500:8500
Forwarding from 127.0.0.1:8500 -> 8500
Forwarding from [::1]:8500 -> 8500
```
Open http://localhost:8500 in a new browser tab, and you should observe a page that looks similar to the following.
»Accessing Consul with the CLI and HTTP API
To access Consul with the CLI, set the following CONSUL_HTTP_ADDR environment variable on the development host so that the Consul CLI knows which Consul server to interact with.
```shell-session
$ export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
```
You should be able to issue the consul members command to view all available Consul datacenter members.
```shell-session
$ consul members
Node                Address           Status  Type    Build  Protocol  DC   Segment
consul-server-0     10.116.0.78:8301  alive   server  1.8.4  2         dc1  <all>
crc-j55b9-master-0  10.116.0.77:8301  alive   client  1.8.4  2         dc1  <default>
```
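The `consul members` output is also easy to post-process in scripts. As an illustration, the following sketch counts the members reporting `alive`; it uses a captured copy of the output above (saved to a hypothetical `members.txt`) rather than a live datacenter.

```shell
# Captured sample of 'consul members' output, standing in for a live
# datacenter in this sketch.
cat > members.txt << 'EOF'
Node                Address           Status  Type    Build  Protocol  DC   Segment
consul-server-0     10.116.0.78:8301  alive   server  1.8.4  2         dc1  <all>
crc-j55b9-master-0  10.116.0.77:8301  alive   client  1.8.4  2         dc1  <default>
EOF

# Skip the header row (NR > 1) and count rows whose Status column ($3)
# is 'alive'. Against a live datacenter, pipe 'consul members' straight
# into the same awk filter instead of reading members.txt.
alive=$(awk 'NR > 1 && $3 == "alive" {n++} END {print n+0}' members.txt)
echo "alive members: $alive"
```

For the environment in this tutorial you would expect two alive members: the `consul-server-0` server and the CRC node running the Consul client.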
You can use the same URL to make HTTP API requests with your custom code.
»Decommission the environment
Now that you have completed the tutorial, you should decommission the CRC environment. Enter CTRL-C in the terminal to stop the port forwarding process.
First, stop the running cluster.
```shell-session
$ crc stop
INFO Stopping the OpenShift cluster, this may take a few minutes...
Stopped the OpenShift cluster
```
Next, issue the following command to delete the cluster.
```shell-session
$ crc delete
```
The CRC CLI will ask you to confirm that you want to delete the cluster.
```shell-session
Do you want to delete the OpenShift cluster? [y/N]:
```

Enter y to confirm.

```shell-session
Deleted the OpenShift cluster
```
In this tutorial, you created a Red Hat OpenShift cluster and installed Consul to the cluster. Specifically, you:
- Deployed an OpenShift cluster
- Deployed a Consul datacenter
- Accessed the Consul UI
- Used the Consul CLI to inspect your environment
- Decommissioned the environment
It is highly recommended that you properly secure your Kubernetes cluster and that you understand and enable the recommended security features of Consul. Refer to the Secure Consul and Registered Services on Kubernetes tutorial to learn how you can deploy an example workload, and secure Consul on Kubernetes for production.
For more information on the Consul Helm chart configuration options, review the consul-helm chart documentation.