Guest Post: Beginner’s Guide to Terraform AWS Compute (Part 2)

In this post, we’ll go over how to create Terraform EC2 instances into a VPC, and how to make them highly available by creating Terraform AWS load balancers.


Welcome back to our series on Terraform AWS.

In case you missed our previous post, we went over the basics of spinning up your infrastructure using Terraform on AWS. We started by creating a virtual machine (an EC2 instance) and allowed HTTP access to it by defining a security group.

In this post, we’ll go over how to create EC2 instances inside a VPC with Terraform, and how to make them highly available by creating AWS load balancers.

If you haven’t read the first part yet, we highly recommend reading it before proceeding with the steps here.

Note: This is a guest post by Sumeet Ninawe from Let’s Do Tech. Sumeet has worked on a multi-cloud management platform, where he used Terraform to orchestrate customer deployments across major cloud providers like AWS, Microsoft Azure, and GCP. You can find him on GitHub, Twitter, and his website.

To follow along on GitHub, check out this link: Terraform AWS Compute: GitHub Repo.

Interested in learning more about Terraform? Join our Slack community to connect with DevOps experts and continue the conversation.

In this article, we’ll go over:

  • Step 1: Creating a Terraform VPC for AWS
  • Step 2: Provisioning- How to Install a Web Server
  • Step 3: What to Do If a Terraform EC2 Instance Goes Down
  • Step 4: How to Test Availability

Step 1: Creating a Terraform VPC for AWS

It’s best practice to create our resources within a VPC. For the example we’re working with, let’s create a basic VPC and use it throughout. We’ll use the community VPC module for Terraform (terraform-aws-modules/vpc/aws).

To do so, add the following code to your file:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16" # example CIDR range; adjust to your needs

  azs            = var.azs
  public_subnets = var.subnet_cidr
}

Set values for the two variables used above in your variables.tfvars file:

azs         = ["us-east-1b", "us-east-1c"]
subnet_cidr = ["10.0.1.0/24", "10.0.2.0/24"] # example subnet CIDRs inside the VPC range
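The values above assume the two variables are declared somewhere in your configuration. If they aren’t yet, a minimal sketch of the declarations (we’re assuming a variables.tf file here) looks like:

```hcl
variable "azs" {
  description = "Availability zones to spread resources across"
  type        = list(string)
}

variable "subnet_cidr" {
  description = "CIDR ranges for the public subnets, one per availability zone"
  type        = list(string)
}
```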

Here, we’re creating a VPC with the given CIDR range and, within that VPC, two public subnets across two availability zones.

Since we’ve introduced a new module, initialize Terraform again in this directory with terraform init. Then run terraform plan and terraform apply, and verify that a VPC named “my-vpc” has been created.
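For reference, the full command sequence for this step looks like this (assuming Terraform is installed and your AWS credentials are configured):

```shell
terraform init    # downloads the newly referenced VPC module
terraform plan    # previews the resources to be created
terraform apply   # creates the VPC and subnets
```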

For more information on AWS VPCs, check out this post.

Step 2: Provisioning - How to Install a Web Server

In this step, we’re going to install an Nginx web server using the user_data attribute. User data is used to run shell scripts when the server boots for the first time. When creating an EC2 instance in the AWS console, we can provide user data as text in the “Configure Instance Details” step.

If you’re looking for a brief review of Terraform provisioning, here’s a guide.

In our code, we pass user data to the EC2 instance by adding the user_data attribute, and we keep the user data script in a separate file. Create a new file named “install.tpl” in the same directory and add the script below to it.

#!/bin/bash
apt-get update -y
apt-get install nginx -y
systemctl start nginx

This script updates the package repositories, installs Nginx, and starts it. Next, we create a data source that reads the contents of this file so it can be used in the aws_instance resource block. Add the data source below to your file:

data "template_file" "user_data" {
  template = file("install.tpl")
}

Also, add the user_data attribute to the aws_instance resource block to use this data source script.

user_data = data.template_file.user_data.rendered

If you’ve followed along with our previous post, we created a security group that opens HTTP access to our instance. We need to make one change to its configuration: associate it with our new VPC. Add the line of code below to your security group resource block.

vpc_id = module.vpc.vpc_id

Refer to the reference code on GitHub to make sure your configurations are correct.

Run terraform plan and terraform apply. Once the apply succeeds, try to access the EC2 instance over HTTP using its public IP address. You should now see the Nginx home page in your browser.

Step 3: What to Do If a Terraform EC2 Instance Goes Down

Currently, we have one EC2 instance running in one AZ. Let’s say that, for some reason, the instance goes down. In this case, all the traffic being served by this instance will be dropped. It would be desirable to have a backup instance to handle the traffic, perhaps in a different AZ.

The answer to this problem is high availability, a broad term that covers many aspects of business continuity. In our example, though, let’s implement a simple form of it: load balancing. In this step, we create an instance in each AZ and a load balancer that routes HTTP traffic to both instances.

Modify the aws_instance resource block as below.

resource "aws_instance" "compute_nodes" {
  ami             = var.ami
  instance_type   = var.instance_type
  count           = length(var.azs)
  security_groups = [] # add the ID of the security group created in the previous post
  subnet_id       = element(module.vpc.public_subnets, count.index)

  user_data = data.template_file.user_data.rendered

  tags = {
    Name = "my-compute-node-${count.index}"
  }
}
Note that we’ve introduced the meta-argument count. Its value is based on the number of AZs; in our case, two AZs in the us-east-1 region, so two VMs will be created. We also want to spread these instances across subnets in different AZs, so we use the element() function to select a subnet based on the count index. Notice that the EC2 instances are also named dynamically using the count index.

Next, let’s set up a load balancer. Create one by adding the aws_lb block below to your file. The attributes in this block are quite straightforward.

resource "aws_lb" "alb" {
  name               = "my-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [] # add the ID of a security group that allows inbound HTTP
  subnets            = module.vpc.public_subnets
}

This resource block only creates the load balancer but does not add the target group or listeners. To create a target group, we use “aws_lb_target_group” and “aws_lb_target_group_attachment” resource blocks as below.

resource "aws_lb_target_group" "alb_tg" {
  name     = "my-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id
}

resource "aws_lb_target_group_attachment" "target_registration" {
  count            = length(var.azs)
  target_group_arn = aws_lb_target_group.alb_tg.arn
  target_id        = aws_instance.compute_nodes[count.index].id
  port             = 80
}
EC2 instances are registered as targets in the target group, and the load balancer routes request traffic to the target group via a listener. The target group is also responsible for health-checking the registered instances. In the above code, we’re creating a target group and registering our instances as targets. Add this code to the file.
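Out of the box, the target group uses AWS’s default health-check settings. If you want to tune them, you can add an optional health_check block inside the aws_lb_target_group resource; the values below are illustrative, not prescriptive:

```hcl
health_check {
  path                = "/"  # hit the Nginx home page
  interval            = 30   # seconds between health checks
  healthy_threshold   = 3    # consecutive successes before a target is healthy
  unhealthy_threshold = 3    # consecutive failures before a target is unhealthy
}
```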

Finally, we need to create a listener using the below code.

resource "aws_lb_listener" "alb_listener" {
  load_balancer_arn = aws_lb.alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.alb_tg.arn
  }
}

Lastly, let’s output the load balancer’s DNS name. Add the lines below to your file.

output "lb_dns" {
  value = aws_lb.alb.dns_name
}

Save the files and run terraform plan and terraform apply. When all of the resources are created, test whether you can reach the Nginx home page via the load balancer’s DNS name. If you’ve followed along, it should be accessible.

Step 4: How to Test Availability

The entire reason for creating a load balancer was to maintain availability when one machine fails. Let’s test this by stopping one of the EC2 instances. Navigate to EC2 in your AWS console and stop one of the VMs. Try to access the load balancer’s DNS name in your browser; it should still work. Run a couple of tests and experiment a bit. Once done, don’t forget to run terraform destroy.
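To watch the failover as it happens, you can poll the load balancer in a loop while you stop an instance. A quick sketch (the <lb_dns> placeholder is ours; replace it with the value of the lb_dns output):

```shell
# Poll the load balancer every 2 seconds and print the HTTP status code.
# Replace <lb_dns> with the value of the lb_dns output.
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" http://<lb_dns>
  sleep 2
done
```

A steady stream of 200 responses, even while one instance is stopped, confirms the load balancer is routing around the failure.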

We hope you’re finding our blog series on Terraform AWS to be helpful. Let us know your thoughts.

We recently published an article on our new tool, InfraSketch. Check out the article (a Faun exclusive) to learn more.