Consul on Elastic Container Service (ECS) integrates with Consul on Elastic Compute Cloud (EC2) to provide you with a complete service mesh ecosystem. Empowering your AWS ECS tasks with Consul service mesh connectivity enables you to take advantage of features such as zero-trust security, intentions, observability, traffic policy, and more.
In this tutorial, you will use Terraform to create an ECS cluster, an example ECS application, and various AWS services. You can choose a learning path to either connect the ECS services with your existing Consul server on EC2 or deploy a complete environment that includes a standalone Consul server on EC2. This environment will be used to highlight the ease of deployment, simplified scalability, and reduced operational overhead gained by utilizing this pattern.
Specifically, you will:
- Customize the Terraform environment deployment script
- Deploy AWS resources using the Terraform script
- Inspect your environment using the Consul UI
- Explore the sample application UI
- Enable service mesh networking with Consul Intentions
- Decommission your environment with Terraform
While this tutorial uses elements that are not suitable for production environments, including a development-grade Consul datacenter and a lack of redundancy within the architecture, it will teach you the core concepts for deploying and interacting with a service mesh using AWS Elastic Container Service (ECS) and Consul on Elastic Compute Cloud (EC2). Refer to the Consul Reference Architecture for Consul best practices and the AWS Well-Architected Documentation for AWS best practices.
To complete this tutorial, you will need the following:
- Basic command line access
- Terraform v1.0.0+ installed
- Git installed
- An AWS account and associated credentials that allow you to create resources
If you don't have AWS Access Credentials, create your AWS Access Key ID and Secret Access Key by navigating to your service credentials in the IAM service on AWS. Click "Create access key" on that page to view your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You will need these values later.
»Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp/learn-consul-terraform.git
Change into the directory with the newly cloned repository. This directory contains the complete configuration files.
$ cd learn-consul-terraform/
Check out the v0.5 tag of the repository.
$ git checkout v0.5
Change directory to the ECS module.
$ cd datacenter-deploy-ecs-ec2/
»Create and configure credential resources
»Create a Terraform configuration file for your secrets
Terraform will utilize your unique credentials to build a complete Elastic Compute Cloud (EC2) Consul cluster and example application in Elastic Container Service (ECS).
Create a file named terraform.tfvars in your working directory and copy the following configuration into the file.
lb_ingress_ip  = "YOUR_PUBLIC_IP"
region         = "us-east-1"
name           = "learn-hcp"
consul_version = "1.10.3"
Replace the placeholders with your values and save the file.
To learn more about each of the Terraform attributes, see the respective resource documentation in the Terraform registry.
Note: By default, secrets created by AWS Secrets Manager require 30 days before they can be deleted. If this tutorial is destroyed and recreated, a name conflict error will occur for these secrets. This can be resolved by changing the value of name in your terraform.tfvars file.
»Configure AWS authentication
Configure AWS credentials for your environment so that Terraform can authenticate with AWS and create resources.
To do this with IAM user authentication, set your AWS access key ID as an environment variable.
$ export AWS_ACCESS_KEY_ID="<YOUR_AWS_ACCESS_KEY_ID>"
Now set your secret key.
$ export AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET_ACCESS_KEY>"
If you have temporary AWS credentials, you must also add your AWS_SESSION_TOKEN as an environment variable. See the AWS Provider Documentation for more details.
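For example, a temporary session token can be exported the same way as the other credentials. The value below is a placeholder for illustration:

```shell
# Placeholder value shown for illustration; substitute your real temporary session token
export AWS_SESSION_TOKEN="<YOUR_AWS_SESSION_TOKEN>"
```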
Tip: If you don't have access to IAM user credentials, use another authentication method described in the AWS provider documentation.
»Explore the Terraform manifest files
The Terraform manifest files used in this tutorial deploy various resources that enable your fully managed service mesh ecosystem. Below is the purpose of each Terraform manifest file.
- consul-server.tf - AWS EC2 Consul cluster deployment resources.
- data.tf - Data sources that allow Terraform to use information defined outside of Terraform.
- ecs-clusters.tf - AWS ECS cluster deployment resources.
- ecs-services.tf - AWS ECS service deployment resources.
- iam.tf - AWS IAM policy and role resources.
- load-balancer.tf - AWS Application Load Balancer (ALB) deployment resources.
- logging.tf - AWS CloudWatch logging configuration.
- modules.tf - AWS ECS task application definitions.
- outputs.tf - Unique values output after Terraform successfully completes a deployment.
- providers.tf - AWS and HCP provider definitions for Terraform.
- secrets-manager.tf - AWS Secrets Manager configuration.
- security-groups.tf - AWS Security Group port management definitions.
- variables.tf - Parameter definitions used to customize unique user environment attributes.
- vpc.tf - AWS Virtual Private Cloud (VPC) deployment resources.
- terraform.tfvars - Your unique credentials and environment attributes (created in the previous step).
- scripts/consul-server-init.sh - A bootstrap script for initializing Consul on an EC2 instance.
Note: By default, the consul-server.tf file creates a single-node Consul server. For production, we recommend using at least a three-node Consul datacenter. Check out the Consul Reference Architecture guide to learn more.
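As a sketch of what the production-oriented change could look like, a Consul server agent configuration for a three-node datacenter typically sets `bootstrap_expect` and a join method. The AWS tag key and value below are hypothetical placeholders, not values from this tutorial's Terraform code:

```hcl
# Illustrative server agent stanza for a three-node datacenter.
# The retry_join tag key/value are placeholders for your environment.
server           = true
bootstrap_expect = 3
retry_join       = ["provider=aws tag_key=Environment tag_value=consul-server"]
```

With `bootstrap_expect = 3`, the servers wait until three members have joined before bootstrapping the Raft cluster, which avoids a split-brain during initial startup.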
»Deploy the Consul + ECS environment
With the Terraform manifest files and your custom credentials file, you are now ready to deploy your infrastructure.
Run the terraform init command from your working directory to download the necessary providers and initialize the backend.
$ terraform init

Initializing the backend...
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Once Terraform has been initialized, you can verify the resources that will be created using the terraform plan command.

$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...
Finally, you can deploy the resources using the terraform apply command.

$ terraform apply
...
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
Remember to confirm the run by entering yes. Once you confirm, it will take a few minutes to complete the deployment. Terraform will print the following output if the deployment is successful.
Apply complete! Resources: 72 added, 0 changed, 0 destroyed.
The deployment could take up to 10 minutes to complete. Feel free to grab a cup of coffee while waiting for the cluster to complete initialization, or learn more about the Raft protocol in a fun, interactive way.
»Perform testing procedures
»Access the Consul UI
Once your resources have been deployed, three unique values will be output to your console. Access the Consul UI by opening the consul_ui_address value in your browser. Log in to the secure Consul instance with the generated acl_bootstrap_token value, which authorizes you to interact with the Consul UI.
In your Consul UI, open the menu item labeled Services on the left side of the screen. Notice the informational text "in service mesh with proxy" on each ECS service.
Note: If the expected services are not displayed when you log into the Consul UI, refresh the page.
»Explore the sample application
One of the ECS service tasks defined in this environment deploys the fake-service application, a Consul client agent, and an Envoy sidecar proxy in your ECS cluster.
Visit the unique client_lb_address URL that was output by Terraform after your run to see the deployed fake-service application.

Notice the lack of communication between the two services. This is due to the deny-by-default service mesh communication behavior.
»Enable service mesh networking
Consul Intentions are used to control which services may establish connections or make requests.
In your Consul UI, open the menu item labeled Intentions on the left side of the screen. Click the Create button in the top right to create an intention.
Set the source service as hcp-ecs-example-client-app, the destination service as hcp-ecs-example-server-app, both namespace fields as default, and communication behavior to Allow. Click the Save button in the bottom left once complete.
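Equivalently, the same rule can be expressed outside the UI as a service-intentions configuration entry and applied with the consul config write command. The sketch below uses the tutorial's service names and assumes you have CLI access to the Consul server with a valid ACL token:

```hcl
# service-intentions config entry allowing client -> server traffic.
# Apply with: consul config write intentions.hcl
Kind = "service-intentions"
Name = "hcp-ecs-example-server-app"
Sources = [
  {
    Name   = "hcp-ecs-example-client-app"
    Action = "allow"
  }
]
```

The UI and the config entry manage the same underlying intention, so either method produces the identical allow rule.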
Revisit the fake-service application at the unique client_lb_address URL that was output by Terraform after your run. Notice that the two services are now able to communicate with each other.
You have successfully deployed a Consul environment across ECS and EC2 using Terraform. Within this environment, you enabled service mesh communication with Consul intentions.
Run the terraform destroy command to clean up the resources you created.
$ terraform destroy
...
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
Remember to confirm by entering yes. Once you confirm, it will take a few minutes to complete the removal. Terraform will print the following output if the command is successful.
Destroy complete! Resources: 72 destroyed.
In this tutorial you used Terraform to deploy a service mesh with AWS Elastic Container Service (ECS) and Consul on Elastic Compute Cloud (EC2). You accomplished this task using the Terraform AWS provider. You also learned how to enable service mesh communication between services using Consul intentions.
You can find the full documentation for the HashiCorp Cloud Platform and AWS providers in the Terraform registry documentation.
To get additional hands-on experience with Consul's service discovery and service mesh features, you can follow these guides to connect a Consul client deployed in a virtual machine or on Elastic Kubernetes Service (EKS).
If you encounter any issues, please contact the HCP team at support.hashicorp.com.