Consul service mesh allows you to deploy applications into a zero-trust network. A zero-trust network is a network where nothing is trusted automatically: all connections must be verified and authorized. This paradigm is important in microservices and multi-cloud environments where many applications and services are running in the same network.
In this guide you will deploy two services, web and api, into Consul's service mesh running on a Kubernetes cluster. The two services will use Consul to discover each other and communicate over mTLS through sidecar proxies. This is the first step in deploying applications into a zero-trust network.
The two services represent a simple two-tier application made of a backend api service and a frontend that communicates with the api service over HTTP and exposes the results in a web UI.
To complete this guide you will need:

- A Kubernetes cluster with Consul service mesh. In the previous guide you used Helm to deploy Consul service mesh on a Kubernetes cluster running locally on your test machine, with Envoy enabled as the sidecar proxy. You will use that cluster to test the commands in this guide.
- kubectl, to interact with your Kubernetes cluster and deploy services.
»Deploy services with sidecar proxies in Kubernetes
With the Consul `connectInject` option enabled in the `consul-values.yaml` file, every service deployed with the `consul.hashicorp.com/connect-inject` annotation is automatically registered in the Consul catalog. When you use this annotation, a sidecar proxy is automatically added to your pod. This proxy handles inbound and outbound service connections, automatically wrapping and verifying TLS connections. Using local sidecar proxies enables simple application integration.
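Concretely, injection is driven by a single annotation on a deployment's pod template. The fragment below is a minimal sketch of that shape; the full manifests used in this guide follow in the next sections.

```yaml
# Minimal sketch: any pod template carrying this annotation gets an
# Envoy sidecar injected and the service registered in the Consul catalog.
spec:
  template:
    metadata:
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
```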
»Define the services
Move into the blueprint folder. The blueprint folder was automatically created when you ran Shipyard for the first time, and contains the example files for these services under the `k8s_config` folder. You will use these files to define and deploy the two services.
»Define the backend service
The `api.yml` file contains the configuration for a deployment of the api backend service.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-v1
  labels:
    app: api-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-v1
  template:
    metadata:
      labels:
        app: api-v1
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: api
          image: nicholasjackson/fake-service:v0.7.8
          ports:
            - containerPort: 9090
          env:
            - name: "LISTEN_ADDR"
              value: "127.0.0.1:9090"
            - name: "NAME"
              value: "api-v1"
            - name: "MESSAGE"
              value: "Response from API v1"
```
»Define the frontend service
The `web.yml` file contains the configuration for a deployment of the web UI service.
```yaml
# Web frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-upstreams": "api:9091"
    spec:
      containers:
        - name: web
          image: nicholasjackson/fake-service:v0.7.8
          ports:
            - containerPort: 9090
          env:
            - name: "LISTEN_ADDR"
              value: "0.0.0.0:9090"
            - name: "UPSTREAM_URIS"
              value: "http://localhost:9091"
            - name: "NAME"
              value: "web"
            - name: "MESSAGE"
              value: "Hello World"
---
# Service to expose web frontend
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - name: http
      protocol: TCP
      port: 9090
      targetPort: 9090
```
»Understand the upstream concept
In this example, the web frontend service depends on the api backend service to operate properly. You can define this relationship by saying that:

- the web frontend service is downstream from the api service
- the api service is upstream from the web frontend service
The web service definition includes another annotation, `consul.hashicorp.com/connect-service-upstreams`. By using this annotation you are explicitly declaring the upstream for the web service. Using the format `name:port`, such as `api:9091`, makes the api service available on `localhost:9091` inside the web service pod. When the web service makes a request to `localhost:9091`, the sidecar proxy establishes a secure mTLS connection with the api service and forwards the request.
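The annotation value accepts a comma-separated list, so a service with several dependencies can declare them all at once. The fragment below is a hypothetical example to illustrate the format; the `cache` upstream does not exist in this guide, which uses a single `api` upstream.

```yaml
# Hypothetical multi-upstream declaration (format: "name:port,name:port").
# Each upstream gets its own local listener inside the pod.
"consul.hashicorp.com/connect-service-upstreams": "api:9091,cache:9092"
```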
»Deploy the services
Once the configuration is complete, you can deploy the applications using `kubectl apply`:

```shell
kubectl apply -f ./k8s_config/api.yml
kubectl apply -f ./k8s_config/web.yml
```
After a few seconds you can watch the application's pods being created and reaching the Running state.

```shell
kubectl get pods --all-namespaces
...
api-deployment-v1-85cc8c9977-z9sv2   3/3     Running   0          35s
web-deployment-76dcfdcc8f-d7f25      3/3     Running   0          32s
```

The 3/3 ready count shows that, alongside the application container, the injected sidecar containers are running in each pod.
You can also confirm the status of the deployment in the Consul UI, http://localhost:18500.
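If you prefer the command line, you can also query the Consul catalog directly. This is a sketch that assumes the Consul HTTP API is reachable on the same address as the UI:

```shell
# List all services registered in the Consul catalog; after the deploy
# you should see web and api together with their sidecar proxies.
curl -s http://localhost:18500/v1/catalog/services
```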
»Access the services
You are able to connect to the Consul UI due to the ingress created in the previous guide. Similarly, to access the UI exposed by the web service, you will create a new Shipyard ingress using the example file provided in the blueprint.
```shell
shipyard run ./ingresses/web-app.hcl
```
Ingresses are a good approach for permanently exposing a service outside your Kubernetes cluster. For test scenarios, or if you are not using Shipyard, you can use `kubectl port-forward service/web 9090:9090` to create a temporary tunnel between your machine and your test environment.
Once the Shipyard ingress is created you can access the web UI by visiting http://localhost:9090/ui.
In this guide you deployed a two-tier application in the Consul service mesh and defined the ports and dependencies for each of the services composing your application. Finally, you configured an ingress to access the frontend of your application. This configuration ensures that all communication between the web and api services passes through the Envoy sidecar proxies and is therefore encrypted using mTLS.
In the next guide you will learn how to configure intentions, which define access control between services in the Consul service mesh and control which services are allowed to establish connections.