HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp Products as a Service (HPaaS) to automate infrastructure on any cloud.
This tutorial will cover the process required to connect an Amazon ECS cluster using EC2 instances to HCP Consul on AWS.
»Prerequisites
- An HCP Consul deployment with a public endpoint (for debugging purposes only)
- Consul 1.9+
- AWS CLI
- jq
- An Amazon VPC with public and private subnets
- An Amazon ECS Cluster using EC2 Instances
- A set of three Amazon ECS container instances with awsvpc mode and AWS Secrets Manager enabled, and with the required cluster attributes set
For this tutorial, you will need to ensure your AWS CLI is logged in with your credentials, and is targeting the region where you have created your ECS cluster. Review the AWS documentation for instructions on how to configure the AWS CLI.
To enable communication between the Consul control plane and the ECS cluster, you will need to complete the process outlined in the manual deployment tutorial or in the Terraform deployment tutorial.
NOTE: This tutorial uses the joatmon08/consul-ecs image for the Consul clients and proxy sidecars. This image is not officially supported by HashiCorp. It abstracts some of the configuration for this tutorial and it is not recommended for production use. Examine the source code on GitHub to customize additional parameters.
»Allow Amazon ECS to communicate with HCP Consul
The container instances that make up the Amazon ECS cluster need to be able to communicate with HCP Consul over the required Consul ports.
Set the name of your ECS cluster as the ECS_CLUSTER_NAME environment variable so you can reference it in commands throughout the tutorial.
$ export ECS_CLUSTER_NAME=<ecs-cluster-name>
Retrieve the ID of the security group associated with your ECS container instances, and set it to the ECS_SECURITY_GROUP_ID environment variable.
$ export ECS_SECURITY_GROUP_ID=$(aws ec2 describe-instances \
--filters "Name=tag:Cluster,Values=${ECS_CLUSTER_NAME}" \
--query 'Reservations[0].Instances[0].SecurityGroups[0].GroupId' \
--output text)
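Optionally, echo the variable to confirm the lookup succeeded. The group ID shown here is illustrative; yours will differ.
$ echo ${ECS_SECURITY_GROUP_ID}
sg-0a1b2c3d4e5f67890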
Verify that the security group for the ECS container instances includes the following rules, which allow HTTP, HTTPS, Serf LAN, and server RPC traffic, as well as TCP traffic between the container instances.
$ aws ec2 describe-security-groups --output text --group-ids ${ECS_SECURITY_GROUP_ID}
IPPERMISSIONS 80 udp 80
IPRANGES 172.25.16.0/20 Consul HTTP
IPPERMISSIONS 8300 tcp 8300
IPRANGES 172.25.16.0/20 HCP Consul Server RPC
IPPERMISSIONS 0 tcp 65535
USERIDGROUPPAIRS Allow all TCP traffic between ECS container instances ${ECS_SECURITY_GROUP_ID}
IPPERMISSIONS 8301 udp 8301
IPRANGES 172.25.16.0/20 Consul LAN Serf (udp)
IPPERMISSIONS 443 udp 443
IPRANGES 172.25.16.0/20 Consul HTTPS
IPPERMISSIONS 8301 tcp 8301
IPRANGES 172.25.16.0/20 Consul LAN Serf (tcp)
»Configure development host
Now, download the Consul client configuration by clicking the "Download client config" button in the HCP portal.
Unzip the client configuration file to a directory named client_config.
$ unzip client_config_bundle_consul_*.zip -d client_config
Archive: client_config_bundle_consul_*.zip
inflating: client_config/client_config.json
inflating: client_config/ca.pem
Change directories to client_config.
$ cd client_config
Confirm that there are both client_config.json and ca.pem files.
$ ls
ca.pem client_config.json
From this same screen in the HCP UI, click the "Generate token" button and then click "Copy" from the dialog box. A global-management root token is now in your clipboard.
Set it to the CONSUL_HTTP_TOKEN environment variable on your development host so that you can reference it later in the tutorial.
$ export CONSUL_HTTP_TOKEN=<your-token>
»Configure Consul secrets
HCP Consul is secure by default. This means that client agents will need to be configured with the gossip encryption key, the Consul CA cert, and a bootstrap ACL token. All three of these secrets will need to be stored in AWS Secrets Manager so that they can be referenced and retrieved during ECS service deployment.
From the unzipped client config directory, base64 encode the ca.pem file and store it into an environment variable named CONSUL_CA_PEM. Encoding the Consul CA cert allows AWS Secrets Manager to inject the secret into the ECS task.
$ export CONSUL_CA_PEM=$(cat ca.pem | base64)
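NOTE: On Linux, GNU base64 wraps its output at 76 characters by default, which produces an invalid JSON string value later in this tutorial. If your encoded certificate contains line breaks, disable wrapping instead.
$ export CONSUL_CA_PEM=$(base64 -w 0 ca.pem)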
Extract the gossip encryption key from the client configuration.
$ export CONSUL_GOSSIP_ENCRYPT=$(jq -r '.encrypt' client_config.json)
Extract the private Consul address from client configuration.
$ export CONSUL_HTTP_ADDR=$(jq -r '.retry_join[0]' client_config.json)
Extract the Consul datacenter from the client configuration.
$ export CONSUL_DATACENTER=$(jq -r '.datacenter' client_config.json)
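Optionally, echo the extracted values to confirm the configuration parsed correctly. The address below is truncated, and your datacenter name may differ.
$ echo ${CONSUL_HTTP_ADDR} ${CONSUL_DATACENTER}
...TRUNCATED....aws.hashicorp.cloud dc1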
You will write a here document for JSON files throughout this tutorial. The here document will interpolate the environment variables you set in your shell as part of this tutorial into the defined string values.
Write a here document that evaluates the CONSUL_HTTP_ADDR, CONSUL_HTTP_TOKEN, CONSUL_CA_PEM, and CONSUL_GOSSIP_ENCRYPT environment variables to a file named consul-client-secret.json for the Consul certificate authority, gossip encryption key, server endpoint, and token.
$ cat > consul-client-secret.json << EOF
{
"retry_join": "${CONSUL_HTTP_ADDR}",
"token": "${CONSUL_HTTP_TOKEN}",
"certificate": "${CONSUL_CA_PEM}",
"encrypt_key": "${CONSUL_GOSSIP_ENCRYPT}"
}
EOF
Verify that the JSON configuration in consul-client-secret.json contains the Consul token, CA, HTTP address, and gossip encryption key in plaintext.
$ cat consul-client-secret.json
{
"retry_join": "...TRUNCATED....aws.hashicorp.cloud",
"token": "5acbf8f2-a271-4583-b373-a30c2a41406",
"certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0...TRUNCATED...",
"encrypt_key": "lkxJc4+KXP...TRUNCATED..."
}
Using the AWS CLI, create a secret named consul and set the value to read from consul-client-secret.json. Save the ARN of the secret into CONSUL_CLIENT_SECRET_ARN.
$ export CONSUL_CLIENT_SECRET_ARN=$(aws secretsmanager create-secret --name consul \
--secret-string file://consul-client-secret.json --query 'ARN' --output text)
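Optionally, confirm the secret was created.
$ aws secretsmanager describe-secret --secret-id consul --query 'Name' --output text
consul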
»Create the AWS IAM Role for Consul client ECS tasks
In order for Amazon ECS tasks to access AWS Secrets Manager, you need to create and assign AWS IAM roles that allow each task to retrieve a specific secret. This is done with a task execution role. The client must use its own role to ensure it cannot access any secrets other than the client secret.
For configuration such as AWS IAM roles or ECS task definitions in this tutorial, write a here document to a file. The here document will interpolate the environment variables you set in your shell as part of this tutorial into the defined string values. You will set environment variables that capture identifiers for the ECS cluster, security group, networking, task definitions, and load balancers. After writing the here document to a file, you can reference the file with the AWS CLI.
Write a here document for an AWS IAM policy to a file named ecs-trusted-entity.json. The policy allows the ECS task to assume a role.
$ cat > ecs-trusted-entity.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
Write a here document which interpolates the CONSUL_CLIENT_SECRET_ARN environment variable into a file named consul-client-policy.json. This policy allows an AWS IAM role to access the secrets for the Consul clients in AWS Secrets Manager.
$ cat > consul-client-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"kms:Decrypt"
],
"Resource": [
"${CONSUL_CLIENT_SECRET_ARN}"
]
}
]
}
EOF
Create an AWS IAM role that uses the trusted entity policy in ecs-trusted-entity.json.
$ export CONSUL_CLIENT_ROLE_ARN=$(aws iam create-role --role-name Consul-Client-Role \
--assume-role-policy-document file://ecs-trusted-entity.json \
--query 'Role.Arn' --output text)
Attach the consul-client-policy.json as an embedded policy to the role.
$ aws iam put-role-policy --role-name Consul-Client-Role \
--policy-name Consul-Client-Policy --policy-document file://consul-client-policy.json
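Optionally, confirm the policy is attached to the role.
$ aws iam list-role-policies --role-name Consul-Client-Role --output text
POLICYNAMES Consul-Client-Policy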
»Create the ECS task definition for Consul clients
After setting up the secrets and enabling ECS tasks access to read them, you can
create the task definitions for Consul clients. You must add the role and secret
ARNs to the task definitions for the Consul clients to run properly.
Use the consul-ecs container to start the client with a few required environment variables. The client runs in host networking mode so that it can be deployed as one per ECS container instance. The Consul ACL token for registering the client will be set as the CONSUL_HTTP_TOKEN environment variable within the container.
Write a here document to a file named consul-client-definition.json. The here document outlines an ECS task definition and interpolates the CONSUL_CLIENT_ROLE_ARN, CONSUL_DATACENTER, and CONSUL_CLIENT_SECRET_ARN environment variables into the JSON.
$ cat > consul-client-definition.json << EOF
{
"executionRoleArn": "${CONSUL_CLIENT_ROLE_ARN}",
"containerDefinitions": [
{
"portMappings": [
{
"hostPort": 8301,
"protocol": "tcp",
"containerPort": 8301
},
{
"hostPort": 8301,
"protocol": "udp",
"containerPort": 8301
},
{
"hostPort": 8302,
"protocol": "tcp",
"containerPort": 8302
},
{
"hostPort": 8300,
"protocol": "tcp",
"containerPort": 8300
},
{
"hostPort": 8600,
"protocol": "tcp",
"containerPort": 8600
},
{
"hostPort": 8600,
"protocol": "udp",
"containerPort": 8600
},
{
"hostPort": 8501,
"protocol": "tcp",
"containerPort": 8501
},
{
"hostPort": 8502,
"protocol": "tcp",
"containerPort": 8502
}
],
"cpu": 10,
"environment": [
{
"name": "CONSUL_DATACENTER",
"value": "${CONSUL_DATACENTER}"
},
{
"name": "CONSUL_CLIENT",
"value": "true"
}
],
"secrets": [
{
"valueFrom": "${CONSUL_CLIENT_SECRET_ARN}:retry_join::",
"name": "CONSUL_HTTP_ADDR"
},
{
"valueFrom": "${CONSUL_CLIENT_SECRET_ARN}:token::",
"name": "CONSUL_HTTP_TOKEN"
},
{
"valueFrom": "${CONSUL_CLIENT_SECRET_ARN}:certificate::",
"name": "CONSUL_CA_PEM"
},
{
"valueFrom": "${CONSUL_CLIENT_SECRET_ARN}:encrypt_key::",
"name": "CONSUL_GOSSIP_ENCRYPT"
}
],
"memory": 100,
"image": "joatmon08/consul-ecs:v1.9.3-v1.16.0",
"name": "consul-client"
}
],
"taskRoleArn": "${CONSUL_CLIENT_ROLE_ARN}",
"family": "consul-client",
"requiresCompatibilities": [
"EC2"
],
"networkMode": "host"
}
EOF
Register the task definition in consul-client-definition.json.
$ export CONSUL_CLIENT_TASK_DEFINITION_ARN=$(aws ecs register-task-definition \
--cli-input-json file://consul-client-definition.json \
--query 'taskDefinition.taskDefinitionArn' --output text)
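Optionally, confirm the task definition registered successfully.
$ aws ecs describe-task-definition --task-definition consul-client \
    --query 'taskDefinition.status' --output text
ACTIVE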
»Deploy the Consul clients
Now, deploy the Consul clients to ECS with the DAEMON scheduling strategy. This will deploy one Consul client per ECS container host in the cluster.
$ aws ecs create-service \
--cluster ${ECS_CLUSTER_NAME} \
--service-name consul \
--task-definition ${CONSUL_CLIENT_TASK_DEFINITION_ARN} \
--scheduling-strategy DAEMON
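Starting the daemon tasks can take a minute or two. If you want to block until the service reaches a steady state, you can use the ECS wait helper, which polls the service until the running count matches the desired count.
$ aws ecs wait services-stable --cluster ${ECS_CLUSTER_NAME} --services consul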
Verify that the Consul clients have deployed successfully by checking that the desired task count matches the running count.
$ aws ecs describe-services --services consul --cluster ${ECS_CLUSTER_NAME}
{
"services": [
{
...TRUNCATED...
"desiredCount": 3,
"runningCount": 3,
"pendingCount": 0,
...TRUNCATED...
],
"failures": []
}
If you deployed HCP Consul with a public endpoint, copy the public URL and set it to the CONSUL_HTTP_ADDR environment variable.
$ export CONSUL_HTTP_ADDR=<hcp-consul-public-url>
Verify that the Consul clients in ECS are registered with HCP Consul.
$ consul members
Node Address Status Type Build Protocol DC Segment
ip-172-25-16-99 172.25.16.99:8301 alive server 1.9.3+ent 2 dc1 <all>
ip-172-25-18-75 172.25.18.75:8301 alive server 1.9.3+ent 2 dc1 <all>
ip-172-25-21-138 172.25.21.138:8301 alive server 1.9.3+ent 2 dc1 <all>
ip-10-0-1-180.us-west-2.compute.internal 10.0.1.180:8301 alive client 1.9.3 2 dc1 <default>
ip-10-0-2-120.us-west-2.compute.internal 10.0.2.120:8301 alive client 1.9.3 2 dc1 <default>
ip-10-0-3-109.us-west-2.compute.internal 10.0.3.109:8301 alive client 1.9.3 2 dc1 <default>
»Configure secrets for an example workload
Now that you have deployed the Consul clients, deploy an application workload. This tutorial uses the Counting demo application.
Create a Consul ACL token that will represent the counting service's identity in Consul and use jq to extract the token's value stored in SecretID.
$ export COUNTING_ACL_TOKEN=$(consul acl token create -service-identity counting -format json \
| jq -r '.SecretID')
Create a Consul ACL token that will represent the dashboard service's identity in Consul and use jq to extract the token's value stored in SecretID.
$ export DASHBOARD_ACL_TOKEN=$(consul acl token create -service-identity dashboard -format json \
| jq -r '.SecretID')
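Optionally, confirm that both tokens were created with service identities attached. This jq filter is a convenience and assumes no other service-identity tokens exist in the datacenter.
$ consul acl token list -format=json | jq -r '.[].ServiceIdentities[]?.ServiceName'
counting
dashboard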
Amazon ECS needs access to the ACL tokens as part of the task definition. As a result, you must store each service's ACL token in a secrets manager, in this case AWS Secrets Manager.
Using the AWS CLI, create a secret named counting and set the value to the COUNTING_ACL_TOKEN environment variable.
$ export COUNTING_SECRET_ARN=$(aws secretsmanager create-secret --name counting \
--secret-string ${COUNTING_ACL_TOKEN} --query 'ARN' --output text)
Using the AWS CLI, create a secret named dashboard and set the value to the DASHBOARD_ACL_TOKEN environment variable.
$ export DASHBOARD_SECRET_ARN=$(aws secretsmanager create-secret --name dashboard \
--secret-string ${DASHBOARD_ACL_TOKEN} --query 'ARN' --output text)
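Optionally, verify that all three secrets now exist in AWS Secrets Manager. The output assumes no other secrets exist in this account and region, and ordering may vary.
$ aws secretsmanager list-secrets --query 'SecretList[].Name' --output text
consul  counting  dashboard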
»Configure AWS IAM roles for an example workload
In order for Amazon ECS tasks to access AWS Secrets Manager, you must create AWS IAM roles that allow each task to retrieve a specific secret. This is done with a task execution role. Each service must be configured with a different role to ensure it can only access its own Consul ACL token.
Write a here document for an AWS IAM policy to a file named ecs-trusted-entity.json. The policy allows the ECS task to assume a role.
$ cat > ecs-trusted-entity.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
Write a here document which evaluates the COUNTING_SECRET_ARN environment variable and saves it in a file named counting-policy.json. This policy allows an AWS IAM role to access the secrets for the counting service in AWS Secrets Manager.
$ cat > counting-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"kms:Decrypt"
],
"Resource": [
"${COUNTING_SECRET_ARN}"
]
}
]
}
EOF
Write a here document which evaluates the DASHBOARD_SECRET_ARN environment variable and saves it in a file named dashboard-policy.json. This policy allows an AWS IAM role to access the secrets for the dashboard service in AWS Secrets Manager.
$ cat > dashboard-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"kms:Decrypt"
],
"Resource": [
"${DASHBOARD_SECRET_ARN}"
]
}
]
}
EOF
Create an AWS IAM role for counting that uses the trusted entity policy in ecs-trusted-entity.json.
$ export COUNTING_ROLE_ARN=$(aws iam create-role --role-name Counting-Service-Role \
--assume-role-policy-document file://ecs-trusted-entity.json \
--query 'Role.Arn' --output text)
Attach the counting-policy.json as an embedded policy.
$ aws iam put-role-policy --role-name Counting-Service-Role \
--policy-name Counting-Service-Policy --policy-document file://counting-policy.json
Create an AWS IAM role for dashboard that uses the trusted entity policy in ecs-trusted-entity.json.
$ export DASHBOARD_ROLE_ARN=$(aws iam create-role --role-name Dashboard-Service-Role \
--assume-role-policy-document file://ecs-trusted-entity.json \
--query 'Role.Arn' --output text)
Attach the dashboard-policy.json as an embedded policy.
$ aws iam put-role-policy --role-name Dashboard-Service-Role \
--policy-name Dashboard-Service-Policy --policy-document file://dashboard-policy.json
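Optionally, confirm each role has its policy attached.
$ aws iam list-role-policies --role-name Counting-Service-Role --output text
POLICYNAMES Counting-Service-Policy
$ aws iam list-role-policies --role-name Dashboard-Service-Role --output text
POLICYNAMES Dashboard-Service-Policy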
»Create the ECS task definitions for an example workload
After setting up the secrets and enabling the ECS task's access to read them, you can create the task definitions for the counting and dashboard services. You must add the role and secret ARNs to the task definitions for the counting and dashboard services in order to run them.
The counting service needs two containers, one for the counting service on port 9001 and another for the counting service's sidecar listening on port 21000. The Consul ACL token for registering the counting service will be set as the CONSUL_HTTP_TOKEN environment variable within the sidecar.
Write a here document to a file named counting-definition.json. The here document outlines an ECS task definition and interpolates the COUNTING_ROLE_ARN and COUNTING_SECRET_ARN environment variables into the JSON.
$ cat > counting-definition.json << EOF
{
"executionRoleArn": "${COUNTING_ROLE_ARN}",
"containerDefinitions": [
{
"portMappings": [
{
"protocol": "tcp",
"containerPort": 9001
}
],
"cpu": 10,
"memory": 300,
"image": "hashicorp/counting-service:0.0.2",
"name": "counting"
},
{
"portMappings": [
{
"protocol": "tcp",
"containerPort": 21000
}
],
"cpu": 10,
"environment": [
{
"name": "CONSUL_PROXY",
"value": "true"
},
{
"name": "SERVICE_NAME",
"value": "counting"
},
{
"name": "SERVICE_PORT",
"value": "9001"
},
{
"name": "SERVICE_HEALTH_CHECK_PATH",
"value": "/health"
}
],
"secrets": [
{
"valueFrom": "${COUNTING_SECRET_ARN}",
"name": "CONSUL_HTTP_TOKEN"
}
],
"memory": 100,
"image": "joatmon08/consul-ecs:v1.9.3-v1.16.0",
"name": "counting-sidecar-proxy"
}
],
"taskRoleArn": "${COUNTING_ROLE_ARN}",
"family": "counting",
"requiresCompatibilities": [
"EC2"
],
"networkMode": "awsvpc"
}
EOF
Register the task definition in counting-definition.json.
$ export COUNTING_TASK_DEFINITION_ARN=$(aws ecs register-task-definition \
--cli-input-json file://counting-definition.json \
--query 'taskDefinition.taskDefinitionArn' --output text)
The dashboard service needs two containers, one for the dashboard service on port 9002 and another for the dashboard service's sidecar listening on port 21000. The Consul ACL token for registering the dashboard service will be set as the CONSUL_HTTP_TOKEN environment variable within the sidecar.
The main difference between the dashboard and counting services is that dashboard accesses the upstream counting service endpoint. Configure this by specifying the upstream using the CONSUL_SERVICE_UPSTREAMS environment variable. The upstream configuration must be a JSON blob with no spaces and escaped double quotes.
Write a here document to a file named dashboard-definition.json. The here document outlines an ECS task definition and interpolates the DASHBOARD_ROLE_ARN and DASHBOARD_SECRET_ARN environment variables into the JSON.
$ cat > dashboard-definition.json << EOF
{
"executionRoleArn": "${DASHBOARD_ROLE_ARN}",
"containerDefinitions": [
{
"portMappings": [
{
"hostPort": 9002,
"protocol": "tcp",
"containerPort": 9002
}
],
"cpu": 10,
"environment": [
{
"name": "COUNTING_SERVICE_URL",
"value": "http://localhost:9001"
}
],
"memory": 300,
"image": "hashicorp/dashboard-service:0.0.4",
"name": "dashboard"
},
{
"portMappings": [
{
"hostPort": 21000,
"protocol": "tcp",
"containerPort": 21000
}
],
"cpu": 10,
"environment": [
{
"name": "CONSUL_PROXY",
"value": "true"
},
{
"name": "SERVICE_NAME",
"value": "dashboard"
},
{
"name": "SERVICE_PORT",
"value": "9002"
},
{
"name": "SERVICE_HEALTH_CHECK_PATH",
"value": "/health"
},
{
"name": "CONSUL_SERVICE_UPSTREAMS",
"value": "[{\"destination_name\":\"counting\",\"local_bind_port\":9001}]"
}
],
"secrets": [
{
"valueFrom": "${DASHBOARD_SECRET_ARN}",
"name": "CONSUL_HTTP_TOKEN"
}
],
"memory": 100,
"image": "joatmon08/consul-ecs:v1.9.3-v1.16.0",
"name": "dashboard-sidecar-proxy"
}
],
"taskRoleArn": "${DASHBOARD_ROLE_ARN}",
"family": "dashboard",
"requiresCompatibilities": [
"EC2"
],
"networkMode": "awsvpc"
}
EOF
Register the task definition in dashboard-definition.json.
$ export DASHBOARD_TASK_DEFINITION_ARN=$(aws ecs register-task-definition \
--cli-input-json file://dashboard-definition.json \
--query 'taskDefinition.taskDefinitionArn' --output text)
»Deploy an example workload
Before deploying the counting service, retrieve the VPC ID, subnet IDs, and security group IDs for your EC2 container instances. You need this information because the counting service runs in awsvpc mode, which means you must specify the networking configuration.
Retrieve all of the subnet IDs associated with the ECS container instances.
$ export ECS_SUBNET_IDS=$(aws ecs list-attributes --target-type container-instance \
--attribute-name ecs.subnet-id --cluster ${ECS_CLUSTER_NAME} \
--query "attributes[*].value" --output json)
Retrieve the VPC ID for the cluster subnets.
$ export ECS_VPC_ID=$(aws ecs list-attributes --target-type container-instance \
--attribute-name ecs.vpc-id --cluster ${ECS_CLUSTER_NAME} \
--query "attributes[0].value" --output text)
Create one instance of the counting service, and add the subnets and security groups related to ECS.
$ aws ecs create-service \
--cluster ${ECS_CLUSTER_NAME} \
--service-name counting \
--task-definition ${COUNTING_TASK_DEFINITION_ARN} \
--desired-count 1 \
--network-configuration "awsvpcConfiguration={subnets=${ECS_SUBNET_IDS},securityGroups=${ECS_SECURITY_GROUP_ID}}"
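The task can take a minute to start. Optionally, wait until the counting service reaches a steady state before checking the catalog.
$ aws ecs wait services-stable --cluster ${ECS_CLUSTER_NAME} --services counting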
Verify that the counting service is registered with Consul using the Consul CLI.
$ consul catalog services
consul
counting
counting-sidecar-proxy
Create a security group for a load balancer. It will forward requests to the dashboard service.
$ export ALB_SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name dashboard-alb \
--description "Allow traffic to Dashboard ALB" --vpc-id ${ECS_VPC_ID} \
--query 'GroupId' --output text)
Authorize an ingress rule that allows traffic from your development environment to the load balancer.
$ aws ec2 authorize-security-group-ingress --group-id ${ALB_SECURITY_GROUP_ID} \
--protocol tcp --port 80 --cidr $(curl -s https://checkip.amazonaws.com)/32
Remove the default outbound rule from the security group since it is too permissive.
$ aws ec2 revoke-security-group-egress \
--group-id ${ALB_SECURITY_GROUP_ID} \
--ip-permissions '[{"IpProtocol": "-1", "FromPort": -1, "ToPort": -1,"IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]'
Replace it with an egress rule that only allows traffic to the ECS cluster's VPC.
$ aws ec2 authorize-security-group-egress \
--group-id ${ALB_SECURITY_GROUP_ID} --protocol tcp \
--port 9002 --cidr $(aws ec2 describe-vpcs --vpc-id ${ECS_VPC_ID} \
--query 'Vpcs[0].CidrBlock' --output text)
Get a list of public subnets for the ECS cluster's VPC.
$ export ECS_PUBLIC_SUBNET_IDS=$(aws ec2 describe-subnets \
--filters Name=vpc-id,Values=${ECS_VPC_ID} \
--query 'Subnets[?MapPublicIpOnLaunch==`true`].SubnetId')
Create the application load balancer to forward requests to the dashboard service.
$ export ALB_ARN=$(aws elbv2 create-load-balancer --name dashboard-alb \
--subnets ${ECS_PUBLIC_SUBNET_IDS} \
--security-groups ${ALB_SECURITY_GROUP_ID} ${ECS_SECURITY_GROUP_ID} \
--query 'LoadBalancers[0].LoadBalancerArn' --output text)
Create a target group for the dashboard service, which will route traffic to dashboard. The target-type must be ip due to awsvpc mode.
$ export DASHBOARD_TARGET_GROUP_ARN=$(aws elbv2 create-target-group \
--name dashboard --protocol HTTP --port 9002 \
--target-type ip --vpc-id ${ECS_VPC_ID} \
--query 'TargetGroups[0].TargetGroupArn' --output text)
Create a listener that forwards traffic to the target group. Make sure you have set ${ALB_ARN} to the ARN of the application load balancer.
$ export LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn ${ALB_ARN} \
--protocol HTTP \
--port 80 \
--default-actions Type=forward,TargetGroupArn=${DASHBOARD_TARGET_GROUP_ARN} \
--query 'Listeners[0].ListenerArn' --output text)
Create one instance of the dashboard service and add the subnets and security groups related to ECS.
$ aws ecs create-service \
--cluster ${ECS_CLUSTER_NAME} \
--service-name dashboard \
--task-definition ${DASHBOARD_TASK_DEFINITION_ARN} \
--load-balancer targetGroupArn=${DASHBOARD_TARGET_GROUP_ARN},containerName=dashboard,containerPort=9002 \
--desired-count 1 \
--network-configuration "awsvpcConfiguration={subnets=${ECS_SUBNET_IDS},securityGroups=${ECS_SECURITY_GROUP_ID}}"
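Optionally, wait until the dashboard service reaches a steady state before checking the catalog.
$ aws ecs wait services-stable --cluster ${ECS_CLUSTER_NAME} --services dashboard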
Verify that the dashboard service is registered with Consul using the Consul CLI.
$ consul catalog services
consul
counting
counting-sidecar-proxy
dashboard
dashboard-sidecar-proxy
»Create the necessary intentions
Since HCP Consul on AWS is secure by default, the datacenter is created with a "default deny" intention in place. This means that, by default, no services can interact with each other until an operator explicitly allows them to do so by creating intentions for each inter-service operation they wish to allow.
Issue the following command to create an intention to allow dashboard to connect to counting.
$ consul intention create dashboard counting
Created: dashboard => counting (allow)
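Optionally, confirm the intention is in effect with the Consul CLI.
$ consul intention check dashboard counting
Allowed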
»Access the dashboard service
Get the endpoint for your load balancer.
$ aws elbv2 describe-load-balancers --load-balancer-arns ${ALB_ARN} \
--query 'LoadBalancers[0].DNSName' --output text
dashboard-alb-1257206425.us-west-2.elb.amazonaws.com
Verify you have successfully deployed the application by visiting the load balancer's DNS name in a browser tab. This validates that Consul service discovery is working, because the dashboard service can resolve and reach the counting service. This also validates that Consul service mesh is working, because the intention you created allows the two services to interact with one another.
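You can also perform this check from the command line by requesting the load balancer endpoint and confirming it returns an HTTP 200. Substitute your own load balancer's DNS name for the illustrative one below.
$ curl -s -o /dev/null -w '%{http_code}\n' http://dashboard-alb-1257206425.us-west-2.elb.amazonaws.com/
200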
»Next Steps
In this tutorial, you deployed a set of Consul clients to an Amazon ECS cluster using EC2 instances and deployed a demo application. To learn more, complete the following tutorials.
If you encounter any issues, please contact the HCP team at support.hashicorp.com.