HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp Products as a Service (HPaaS) to automate infrastructure on any cloud.
This tutorial covers the process required to connect an Amazon EKS cluster to HCP Consul on AWS.
»Prerequisites
The following prerequisites are required:
- An HCP HashiCorp Virtual Network (HVN)
- An HCP Consul deployment
- AWS CLI
- kubectl
- helm
- jq
- git
- An EKS Cluster deployed in the VPC associated with your HVN
For this tutorial, you will need to ensure that you have authenticated with the AWS CLI, and that the CLI is targeting the region where you have created your EKS cluster. Review the AWS documentation for instructions on how to configure the AWS CLI.
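As a quick sanity check (optional, not one of the tutorial's required steps), the following commands confirm which identity and default region the AWS CLI is currently using:
$ aws sts get-caller-identity
$ aws configure get region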
To ensure that communication is possible between your HCP Consul servers and the client agents running in your EKS cluster, you must complete the steps detailed in either the manual deployment or the Terraform deployment tutorial. Your EKS cluster must also be deployed in the VPC associated with your HVN. Otherwise, no peering connection will be configured, and the control plane running on the HCP Consul servers will not be able to communicate with the data plane provided by the client agents you will install to the EKS cluster.
»Configure development host
Kubernetes stores cluster connection information in a special file called kubeconfig. You can retrieve the Kubernetes configuration settings for your EKS cluster and merge them into your local kubeconfig file by issuing the following command.
$ aws eks --region [your-region] update-kubeconfig --name [your-cluster-name]
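To confirm that the new context is active and that the cluster is reachable, you can optionally run:
$ kubectl config current-context
$ kubectl cluster-info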
You can use the HCP Portal to retrieve the client configuration information you need to connect your EKS cluster client agents to your Consul cluster. Navigate to the Consul resource page in the HCP portal, and then select the Consul cluster you want to join your EKS cluster client agents to. Click the Download client config button to download a zip archive that contains the necessary files to join your client agents to the cluster. The archive includes a default client configuration and certificate. Both should be considered secrets, and should be kept in a secure location.
Unzip the client config package into the current working directory, and then use ls to confirm that both the client_config.json and ca.pem files are available.
$ ls
ca.pem client_config.json
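For reference, the fields of client_config.json used later in this tutorial look roughly like the following. The values here are illustrative placeholders, and the real file contains additional fields this tutorial does not use.
{
  "datacenter": "dc1",
  "encrypt": "<base64-encoded-gossip-key>",
  "retry_join": ["<your-cluster>.private.consul.<uuid>.aws.hashicorp.cloud"]
}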
From this same screen in the HCP UI, click the "Generate token" button and then click "Copy" from the dialog box. A global-management root token is now in your clipboard. Set this token to the CONSUL_HTTP_TOKEN environment variable on your development host so that you can reference it later in the tutorial.
$ export CONSUL_HTTP_TOKEN=[your-token]
»Configure Consul secrets
Consul Service on HCP is secure by default. This means that client agents will need to be configured with the gossip encryption key, the Consul CA certificate, and a root ACL token. All three of these secrets will need to be stored as Kubernetes secrets so that they can be referenced and retrieved during the Helm chart installation.
Use the ca.pem file in the current directory to create a Kubernetes secret to store the Consul CA certificate.
$ kubectl create secret generic "consul-ca-cert" --from-file='tls.crt=./ca.pem'
secret/consul-ca-cert created
The Consul gossip encryption key is embedded in the client_config.json file that you downloaded and extracted into your current directory. Issue the following command to create a Kubernetes secret that stores the Consul gossip encryption key. The command uses jq to extract the value from the client_config.json file.
$ kubectl create secret generic "consul-gossip-key" --from-literal="key=$(jq -r .encrypt client_config.json)"
secret/consul-gossip-key created
The last secret you need to add is an ACL bootstrap token. You can use the one you set to your CONSUL_HTTP_TOKEN environment variable earlier. Issue the following command to create a Kubernetes secret to store the bootstrap ACL token.
Note: If you are configuring a production environment, you should create a client token with a minimum set of privileges. For an in-depth review of how to configure ACLs for Consul, refer to the Secure Consul with Access Control Lists tutorial or the official documentation.
$ kubectl create secret generic "consul-bootstrap-token" --from-literal="token=${CONSUL_HTTP_TOKEN}"
secret/consul-bootstrap-token created
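To double-check that all three secrets exist before moving on, you can list them by name:
$ kubectl get secret consul-ca-cert consul-gossip-key consul-bootstrap-token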
»Install Consul clients on EKS
This section uses the official consul-helm chart to install the Consul client agents to your EKS cluster.
If you have not done so already, add the HashiCorp Helm repository:
$ helm repo add hashicorp https://helm.releases.hashicorp.com && helm repo update
"hashicorp" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈ Happy Helming!⎈
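You can optionally confirm that the chart version used later in this tutorial is available in the repository:
$ helm search repo hashicorp/consul --versions | grep 0.30.0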
Now you will extract some more configuration values from the client_config.json file and set them to environment variables that can be used to generate your Helm values file. Issue the following command to set the DATACENTER environment variable.
$ export DATACENTER=$(jq -r .datacenter client_config.json)
Extract the private server URL from the client config so that it can be set in the Helm values file as the externalServers:hosts entry. This value will be passed as the retry-join option to the Consul clients.
$ export RETRY_JOIN=$(jq -r --compact-output .retry_join client_config.json)
Retrieve the API server URL of your EKS cluster from your kubeconfig so that it can be set in the Helm values file as the k8sAuthMethodHost entry.
Note: The following script relies on your cluster matching your current-context name. If you have created an alias for your context, or the current-context name does not match the cluster name for any other reason, you must manually set CONSUL_HTTP_ADDR to the API server URL of your EKS cluster. You can use kubectl config view to view your cluster and retrieve the API server URL.
$ export CONSUL_HTTP_ADDR=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")
Validate that all of your environment variables have been set.
$ echo $DATACENTER && \
echo $RETRY_JOIN && \
echo $CONSUL_HTTP_ADDR
Example output:
dc1
["learn-hcp-eks-cluster.private.consul.00000000-0000-0000-0000-000000000000.aws.hashicorp.cloud"]
https://000000000000.gr7.us-west-2.eks.amazonaws.com
Note: If any of these environment variables are not correctly set, the following script will generate an incomplete Helm values file, and the Consul Helm installation will not succeed.
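If you prefer an explicit check over eyeballing the output, a minimal Bash sketch like the following flags any of the three variables that is empty:
$ for v in DATACENTER RETRY_JOIN CONSUL_HTTP_ADDR; do [ -n "${!v}" ] || echo "$v is not set"; done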
Generate the Helm values file. Notice that this configuration sets up an ingress gateway. This will be covered in more depth later.
$ cat > config.yaml << EOF
global:
  name: consul
  enabled: false
  datacenter: ${DATACENTER}
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-token
      secretKey: token
  gossipEncryption:
    secretName: consul-gossip-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
  hosts: ${RETRY_JOIN}
  httpsPort: 443
  useSystemRoots: true
  k8sAuthMethodHost: ${CONSUL_HTTP_ADDR}
client:
  enabled: true
  join: ${RETRY_JOIN}
connectInject:
  enabled: true
controller:
  enabled: true
ingressGateways:
  enabled: true
  defaults:
    replicas: 1
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
EOF
Validate that the config file is populated correctly.
$ more config.yaml
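If you would like to inspect the Kubernetes manifests the chart will generate before installing anything, helm template renders them locally:
$ helm template consul -f config.yaml hashicorp/consul --version "0.30.0"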
Install the HashiCorp Consul Helm chart:
$ helm install consul -f config.yaml hashicorp/consul --version "0.30.0"
NAME: consul
...TRUNCATED...
You can inspect the release and its rendered resources at any time with helm status and helm get all.
$ helm status consul
$ helm get all consul
Once the helm install command completes, verify the Consul pods have been successfully deployed by issuing kubectl get pods.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-8pcg8 1/1 Running 0 2m38s
consul-connect-injector-webhook-deployment-7b6988cdcb-94wpr 1/1 Running 0 2m38s
consul-controller-75bd46b7fd-56fdv 1/1 Running 0 2m38s
consul-ingress-gateway-55c76678f8-rn8pz 2/2 Running 0 2m38s
consul-rrhc8 1/1 Running 0 2m38s
consul-stjn5 1/1 Running 0 2m38s
consul-webhook-cert-manager-654454bd54-jjvw2 1/1 Running 0 2m38s
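If you are scripting this step, kubectl wait can block until the pods report ready instead of polling manually. The timeout value here is an arbitrary example:
$ kubectl wait --for=condition=Ready pods --all --timeout=5m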
»Deploy an example workload
Now that the clients have been deployed, it is time to deploy an application workload. This tutorial will use the HashiCups demo application. Issue the following command to clone the repository to the development host.
$ git clone https://github.com/hashicorp/learn-consul-kubernetes.git
Next, check out the tagged version verified for this tutorial, and then change into the directory that contains the example for this tutorial.
$ cd learn-consul-kubernetes
$ git checkout tags/v0.0.6
$ cd consul-client-eks
The repository contains the necessary yaml files to deploy the application. Issue the following command to deploy the application to your EKS cluster.
$ kubectl apply -f ./hashicups
Example output:
service/frontend created
serviceaccount/frontend created
configmap/nginx-configmap created
deployment.apps/frontend created
service/product-api-service created
serviceaccount/product-api created
configmap/db-configmap created
deployment.apps/product-api created
service/postgres created
serviceaccount/postgres created
deployment.apps/postgres created
service/public-api created
serviceaccount/public-api created
deployment.apps/public-api created
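You can optionally wait for the deployments to finish rolling out before proceeding, for example:
$ kubectl rollout status deploy/frontend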
»Register a Consul ingress gateway
In Consul, an ingress gateway is a kind of resource called a config entry. An ingress gateway is an endpoint that allows external traffic to reach specific resources, such as the application user interface, that are hosted within the datacenter.
Use the following command to create a file named ingress-gateway.yaml. You will use this to register an ingress gateway with Consul. Notice the name value matches the ingressGateways:gateways:name entry in config.yaml. These fields must match.
$ cat > ingress-gateway.yaml << EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  listeners:
    - port: 8080
      protocol: http
      services:
        - name: frontend
EOF
Now, register the config entry with Consul.
$ kubectl apply -f ./ingress-gateway.yaml
ingressgateway.consul.hashicorp.com/ingress-gateway created
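You can verify that the custom resource was accepted, and check whether it has synced to Consul, with kubectl. The output should include a SYNCED column, which is populated once the config entry is written to the Consul servers:
$ kubectl get ingressgateway ingress-gateway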
»Create the necessary intentions
Since HCP Consul on AWS is secure by default, the datacenter is created with a "default deny" intention in place. This means that, by default, no services can interact with each other until an operator explicitly allows them to do so by creating intentions for each inter-service operation they wish to allow.
As of Consul 1.9, service-intentions config entries can be created using CRDs. Use the following command to create a file named service-intentions.yaml.
$ cat > service-intentions.yaml << EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: ingress-gateway-to-frontend
spec:
  destination:
    name: frontend
  sources:
    - name: ingress-gateway
      action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: frontend-to-public-api
spec:
  destination:
    name: public-api
  sources:
    - name: frontend
      action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: public-api-to-product-api
spec:
  destination:
    name: product-api
  sources:
    - name: public-api
      action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: product-api-to-postgres
spec:
  destination:
    name: postgres
  sources:
    - name: product-api
      action: allow
EOF
Run the following command to create the service-intentions config entries the HashiCups application requires to run.
$ kubectl apply -f ./service-intentions.yaml
serviceintentions.consul.hashicorp.com/ingress-gateway-to-frontend created
serviceintentions.consul.hashicorp.com/frontend-to-public-api created
serviceintentions.consul.hashicorp.com/public-api-to-product-api created
serviceintentions.consul.hashicorp.com/product-api-to-postgres created
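As with the ingress gateway, you can list the intentions and confirm they have synced to Consul:
$ kubectl get serviceintentions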
»Access the HashiCups UI
With the intentions in place, all that remains is to retrieve the public URL and port of the ingress gateway. Do this by retrieving a list of services.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-ingress-gateway LoadBalancer 10.0.53.153 a32115aa6b19546e188afc44fbb8f3b3-1889150178.us-west-2.elb.amazonaws.com 8080/TCP 4h43m
...TRUNCATED
Use the following command to verify you have successfully deployed the application.
$ INGRESS_GATEWAY=$(kubectl get svc/consul-ingress-gateway -o json | jq -r '.status.loadBalancer.ingress[0].hostname') && \
echo "Connecting to \"$INGRESS_GATEWAY\"" && \
curl -H "Host: frontend.ingress.consul" "http://$INGRESS_GATEWAY:8080"
Example output:
Connecting to "a32115aa6b19546e188afc44fbb8f3b3-1889150178.us-west-2.elb.amazonaws.com"
<!doctype html>
...TRUNCATED...
</html>
This validates that Consul service discovery is working, because the services are able to resolve the upstreams. This also validates that Consul service mesh is working, because the intentions that were created are allowing services to interact with one another.
»Next steps
In this tutorial, you connected Consul clients on EKS to HCP Consul and deployed a demo application. To keep learning about Consul's features, and for step-by-step examples of how to perform common Consul tasks, complete one of the following tutorials.
- Explore the Consul UI
- Review recommended practices for Consul on Kubernetes
- Deploy a metrics pipeline with Prometheus and Grafana
If you encounter any issues, please contact the HCP team at support.hashicorp.com.