
Write Terraform Configuration


Manage Similar Resources With For Each

In this tutorial, you will provision a VPC, load balancer, and EC2 instances on AWS. Then you will refactor your configuration to provision multiple projects with the for_each argument and a data structure.

The for_each argument will iterate over a data structure to configure resources or modules with each item in turn. It works best when the duplicate resources need to be configured differently but share the same lifecycle.
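For example, a minimal sketch of this pattern (with illustrative resource and variable names, not part of this tutorial's configuration):

```hcl
# Hypothetical example: create one S3 bucket per entry in a map.
variable "buckets" {
  type = map(string)
  default = {
    logs   = "log-archive"
    assets = "static-assets"
  }
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets

  # Inside the block, each.key is the map key and
  # each.value is the corresponding value.
  bucket = "${each.key}-${each.value}"
}
```

Terraform tracks each instance by its map key (for example, aws_s3_bucket.this["logs"]), so adding or removing a key only affects that one instance.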


»Apply initial configuration

Clone the example GitHub repository.

$ git clone

Change into the new directory.

$ cd learn-terraform-count-foreach

Check out the initial configuration.

$ git checkout tags/foreach-initial-configuration -b foreach-initial-configuration

The configuration in main.tf will provision a VPC with public and private subnets, a load balancer, and EC2 instances in each private subnet. The variables in variables.tf allow you to configure the VPC. For instance, the private_subnets_per_vpc variable controls the number of private subnets the configuration will create.

Initialize Terraform in this directory. Terraform will install the AWS provider and the vpc, app_security_group, lb_security_group, and elb_http modules.

$ terraform init

Once your directory has been initialized, apply the configuration, and remember to confirm with a yes.

$ terraform apply

Refactor the VPC and related configuration so that Terraform can deploy multiple projects at the same time, each with its own VPC and related resources.

$ git checkout tags/foreach-multiple-projects -b foreach-multiple-projects

»Define a map to configure each project

Define a map for project configuration in variables.tf that for_each will iterate over to configure each resource.

variable project {
  description = "Map of project names to configuration."
  type        = map
  default     = {
    client-webapp = {
      public_subnets_per_vpc  = 2,
      private_subnets_per_vpc = 2,
      instances_per_subnet    = 2,
      instance_type           = "t2.micro",
      environment             = "dev"
    },
    internal-webapp = {
      public_subnets_per_vpc  = 1,
      private_subnets_per_vpc = 1,
      instances_per_subnet    = 2,
      instance_type           = "t2.nano",
      environment             = "test"
    }
  }
}

Since the project variable includes most of the options that were previously configured by individual variables, comment out or remove these variables from variables.tf.

-variable project_name {
-  description = "Name of the project. Used in resource names and tags."
-  type        = string
-  default     = "client-webapp"
-}

-variable environment {
-  description = "Value of the 'Environment' tag."
-  type        = string
-  default     = "dev"
-}

-variable public_subnets_per_vpc {
-  description = "Number of public subnets. Maximum of 16."
-  type        = number
-  default     = 2
-}

-variable private_subnets_per_vpc {
-  description = "Number of private subnets. Maximum of 16."
-  type        = number
-  default     = 2
-}

-variable instance_type {
-  description = "Type of EC2 instance to use."
-  type        = string
-  default     = "t2.micro"
-}

-variable instances_per_subnet {
-  description = "Number of EC2 instances in each private subnet"
-  type        = number
-  default     = 2
-}

»Add for_each to the VPC

Now use for_each to iterate over the project map in the vpc module block of main.tf, which will create one VPC for each key/value pair in the map.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.44.0"
+ for_each = var.project

# ...truncated...

Terraform will provision multiple VPCs, assigning each key/value pair in the var.project map to each.key and each.value respectively. for_each also accepts a set of strings; in that case, each.key and each.value are both the value of the item.
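For example (an illustrative snippet, not part of this tutorial's configuration), you can convert a list to a set with toset to use it with for_each:

```hcl
# With a set of strings, each.key and each.value are identical.
resource "aws_subnet" "example" {
  for_each = toset(["10.0.101.0/24", "10.0.102.0/24"])

  vpc_id     = aws_vpc.example.id  # hypothetical VPC resource
  cidr_block = each.value          # same as each.key here
}
```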

In this example, the project map includes values for the number of private and public subnets in each VPC. Update the subnet configuration in the vpc module block in main.tf to use each.value to refer to these values.

  azs             = data.aws_availability_zones.available.names
- private_subnets = slice(var.private_subnet_cidr_blocks, 0, var.private_subnets_per_vpc)
- public_subnets  = slice(var.public_subnet_cidr_blocks, 0, var.public_subnets_per_vpc)
+ private_subnets = slice(var.private_subnet_cidr_blocks, 0, each.value.private_subnets_per_vpc)
+ public_subnets  = slice(var.public_subnet_cidr_blocks, 0, each.value.public_subnets_per_vpc)

Update the app_security_group module to iterate over the project variable to get the security group name, VPC ID, and CIDR blocks for each project.

module "app_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "3.12.0"
+ for_each = var.project

- name        = "web-server-sg-${var.project_name}-${var.environment}"
+ name        = "web-server-sg-${each.key}-${each.value.environment}"
  description = "Security group for web-servers with HTTP ports open within VPC"
- vpc_id      = module.vpc.vpc_id
+ vpc_id      = module.vpc[each.key].vpc_id
- ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
+ ingress_cidr_blocks = module.vpc[each.key].public_subnets_cidr_blocks

You can differentiate between instances of resources and modules configured with for_each by indexing into them with the keys of the map. In this example, using module.vpc[each.key].vpc_id to define the VPC means that the security group for a given project will be assigned to the corresponding VPC.
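Outside of a for_each block, you can also address a single instance directly by its map key. For example (an illustrative output using one of the project keys defined earlier):

```hcl
# Address a single instance of the vpc module by its map key.
output "client_webapp_vpc_id" {
  description = "VPC ID for the client-webapp project only."
  value       = module.vpc["client-webapp"].vpc_id
}
```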

»Update the load balancer and its security group

Update the configuration for the load balancer security groups to iterate over the project variable to get their names and VPC IDs.

module "lb_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "3.12.0"

+ for_each = var.project

- name = "load-balancer-sg-${var.project_name}-${var.environment}"
+ name = "load-balancer-sg-${each.key}-${each.value.environment}"
  description = "Security group for load balancer with HTTP ports open within VPC"
- vpc_id      = module.vpc.vpc_id
+ vpc_id      = module.vpc[each.key].vpc_id
  ingress_cidr_blocks = ["0.0.0.0/0"]

Update the elb_http block so that each load balancer's name includes the project name and environment, and so that each one uses the corresponding security group and subnets.

module "elb_http" {
  source  = "terraform-aws-modules/elb/aws"
  version = "2.4.0"
+ for_each = var.project
  # Comply with ELB name restrictions
- name = trimsuffix(substr(replace(join("-", ["lb", random_string.lb_id.result, var.project_name, var.environment]), "/[^a-zA-Z0-9-]/", ""), 0, 32), "-")
+ name = trimsuffix(substr(replace(join("-", ["lb", random_string.lb_id.result, each.key, each.value.environment]), "/[^a-zA-Z0-9-]/", ""), 0, 32), "-")
  internal = false
- security_groups = [module.lb_security_group.this_security_group_id]
- subnets         = module.vpc.public_subnets
+ security_groups = [module.lb_security_group[each.key].this_security_group_id]
+ subnets         = module.vpc[each.key].public_subnets

»Move EC2 instance to a module

You will also need to update the instance resource block to assign EC2 instances to each VPC. However, the block already uses count. You cannot use both count and for_each in the same block.

To solve this, you will move the aws_instance resource into a module, including the count argument, and then use for_each when referring to the module in your main.tf file. The example repository includes a module with this configuration in the modules/aws-instance directory. For a detailed example of how to move configuration into a local module, try the Create a Terraform Module tutorial.
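As a rough sketch of what such a module looks like (the repository's actual module may differ in its details; the AMI filter and the modulo-based subnet distribution here are simplifying assumptions), the module accepts the instance settings as input variables and keeps the count argument internal:

```hcl
# modules/aws-instance/main.tf (simplified sketch)
variable "instance_count" {
  type = number
}
variable "instance_type" {
  type = string
}
variable "subnet_ids" {
  type = list(string)
}
variable "security_group_ids" {
  type = list(string)
}
variable "project_name" {
  type = string
}
variable "environment" {
  type = string
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "app" {
  count = var.instance_count

  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.instance_type

  # Distribute the instances across the provided subnets.
  subnet_id              = var.subnet_ids[count.index % length(var.subnet_ids)]
  vpc_security_group_ids = var.security_group_ids

  tags = {
    Project     = var.project_name
    Environment = var.environment
  }
}

output "instance_ids" {
  value = aws_instance.app.*.id
}
```

The instance_ids output is what the root module references below; count stays inside the module, while for_each operates on the module block itself.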

Remove the resource "aws_instance" "app" and data "aws_ami" "amazon_linux" blocks from your root module's main.tf file, and replace them with a reference to the aws-instance module.

module "ec2_instances" {
  source = "./modules/aws-instance"

  for_each = var.project

  instance_count     = each.value.instances_per_subnet * length(module.vpc[each.key].private_subnets)
  instance_type      = each.value.instance_type
  subnet_ids         = module.vpc[each.key].private_subnets[*]
  security_group_ids = [module.app_security_group[each.key].this_security_group_id]

  project_name = each.key
  environment  = each.value.environment
}

Next, replace the references to the EC2 instances in the module "elb_http" block with references to the new module.

- number_of_instances = length(aws_instance.app)
- instances           = aws_instance.app.*.id
+ number_of_instances = length(module.ec2_instances[each.key].instance_ids)
+ instances           = module.ec2_instances[each.key].instance_ids

Finally, replace the entire contents of outputs.tf in your root module with the following.

output public_dns_names {
  description = "Public DNS names of the load balancers for each project"
  value       = { for p in sort(keys(var.project)) : p => module.elb_http[p].this_elb_dns_name }
}

output vpc_arns {
  description = "ARNs of the vpcs for each project"
  value       = { for p in sort(keys(var.project)) : p => module.vpc[p].vpc_arn }
}

output instance_ids {
  description = "IDs of EC2 instances"
  value       = { for p in sort(keys(var.project)) : p => module.ec2_instances[p].instance_ids }
}

The for expressions used here will map the project names to the corresponding values in the Terraform output.
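You can see how such a for expression behaves with a small self-contained example (illustrative values, evaluable in a locals block or in terraform console):

```hcl
locals {
  project = {
    client-webapp   = { environment = "dev" }
    internal-webapp = { environment = "test" }
  }

  # Maps each project name to its environment:
  # { client-webapp = "dev", internal-webapp = "test" }
  environments = { for p in sort(keys(local.project)) : p => local.project[p].environment }
}
```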

»Apply scalable configuration

Initialize the new module.

$ terraform init

Now apply the changes. Remember to respond to the confirmation prompt with yes.

$ terraform apply

Terraform will list the outputs for each project.

# ...truncated...


instance_ids = {
  "client-webapp" = [
    # ...truncated...
  ]
  "internal-webapp" = [
    # ...truncated...
  ]
}
public_dns_names = {
  "client-webapp" = ""
  "internal-webapp" = ""
}
vpc_arns = {
  "client-webapp" = "arn:aws:ec2:us-east-2:130490850807:vpc/vpc-00bd9888322925dc2"
  "internal-webapp" = "arn:aws:ec2:us-east-2:130490850807:vpc/vpc-01aa642055624f109"
}

This configuration creates a separate VPC for each project defined in variables.tf. count and for_each allow you to create more flexible configurations and reduce duplicate resource and module blocks.

»Clean up resources

After verifying that the projects deployed successfully, run terraform destroy to destroy them. Remember to respond to the confirmation prompt with yes.

$ terraform destroy

»Next steps

Now that you have used for_each in your configuration, explore the following resources.