Terraform & AWS: How to deploy a load balancer

Updated: February 3, 2024 By: Guest Contributor

Introduction

In this tutorial, we’ll explore how to deploy a load balancer in AWS using Terraform. Terraform, an open-source IaC (Infrastructure as Code) tool created by HashiCorp, empowers you to build, change, and version cloud infrastructure seamlessly. This guide assumes you have basic knowledge of AWS and Terraform. Let’s dive in with multiple code examples, from basic to advanced configurations, including outputs.

Before you begin, make sure you have the following prerequisites:

  • Terraform installed on your system
  • An AWS account
  • The AWS CLI installed and configured with your credentials

Step-by-Step Guide

Step 1: Set up Your Terraform Configuration

Start by creating a new directory for your project and navigating into it. Create a file named main.tf.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

provider "aws" {
  region = "your-region" # Replace with your AWS region, e.g. "us-east-1"
}

This configuration specifies the AWS provider and sets the region where you’ll deploy your resources.

Step 2: Define Your Elastic Load Balancer (ELB)

Create another file named elb.tf and add the following:

resource "aws_elb" "example" {
  name = "terraform-example-elb"

  # A Classic ELB requires either availability_zones or subnets.
  # Replace with zones in your region, e.g. ["us-east-1a", "us-east-1b"].
  availability_zones = ["your-region-a"]

  listener {
    instance_port     = 80
    instance_protocol = "HTTP"
    lb_port           = 80
    lb_protocol       = "HTTP"
  }

  health_check {
    target              = "HTTP:80/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  instances                 = [] # EC2 instance IDs to attach (added later)
  cross_zone_load_balancing = true
  idle_timeout              = 60
}

This code snippet creates a basic Classic ELB that listens on HTTP port 80 and checks instance health by requesting HTTP:80/ every 30 seconds, marking an instance healthy or unhealthy after two consecutive successes or failures.
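To surface the load balancer's DNS name after deployment, you can also declare an output, for example in a file named outputs.tf (the output name here is illustrative):

```hcl
# outputs.tf — prints the ELB's public DNS name after `terraform apply`
output "elb_dns_name" {
  description = "Public DNS name of the example ELB"
  value       = aws_elb.example.dns_name
}
```

The same value can be retrieved at any time with `terraform output elb_dns_name`.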

Step 3: Initialize Terraform and Apply Your Configuration

In your project directory, run the following commands:

$ terraform init
$ terraform apply

After running terraform apply, Terraform prompts you to confirm the action. Type yes, and Terraform will create your ELB in AWS. It might take a few minutes. Once done, Terraform reports the resources it created, and the values of any outputs you have defined are displayed as well.

Expanding Your Setup

To attach EC2 instances to your ELB, modify the instances attribute in elb.tf, specifying the instance IDs. For a more dynamic setup, use Terraform to create EC2 instances and reference them automatically.
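A minimal sketch of that dynamic approach: define the instances in Terraform and reference their IDs instead of hard-coding them (the AMI ID is a placeholder; substitute one valid for your region):

```hcl
# Two web servers to sit behind the load balancer.
resource "aws_instance" "web" {
  count         = 2
  ami           = "ami-123456" # placeholder; use a real AMI for your region
  instance_type = "t2.micro"
}
```

In elb.tf, the instances attribute then becomes `instances = aws_instance.web[*].id`, and Terraform resolves the IDs automatically once the instances are created.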

Advanced Configuration: Implementing Auto Scaling

Load balancing works best with auto-scaling, ensuring your application can handle variations in traffic. Add a new file named autoscale.tf and include:

resource "aws_launch_configuration" "example" {
  name          = "terraform-example-launch-configuration"
  image_id      = "ami-123456" # placeholder; use a real AMI for your region
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "example" {
  launch_configuration = aws_launch_configuration.example.id
  min_size             = 1
  max_size             = 3
  health_check_type    = "ELB"
  load_balancers       = [aws_elb.example.name]
  vpc_zone_identifier  = ["subnet-123456"] # placeholder; use a subnet ID in your VPC
}

This sets up an Auto Scaling group, integrated with the ELB you previously created, that keeps between one and three instances running and uses the ELB's health checks. Note that on its own the group will not react to traffic; to scale in response to load, you also need to attach a scaling policy.
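As a sketch of such a policy (the policy name and the 50% CPU target are illustrative choices, not part of the original setup), a target-tracking policy can be added alongside the group:

```hcl
# Scales the group to keep average CPU utilization around 50%.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "terraform-example-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.example.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```

With target tracking, AWS creates and manages the underlying CloudWatch alarms for you, so no separate alarm resources are needed.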

Accessing Your Load Balancer

If you define an output for the ELB's dns_name attribute, the DNS name appears in the Terraform outputs; use it to access your application through the load balancer. Remember, your instances need to serve content on the port specified in your ELB configuration.

Conclusion

By following this guide, you’ve learned how to deploy and configure an AWS load balancer using Terraform. This setup forms a solid foundation for managing your application’s scalability and availability. Explore further by integrating more AWS services and Terraform modules to enhance your infrastructure.