How to implement load balancing with NGINX

Updated: January 19, 2024 By: Guest Contributor

Introduction to Load Balancing

Load balancing is a critical strategy for distributing incoming network traffic across a group of backend servers, known as a server farm or server pool. In this tutorial, we will discuss how to implement load balancing using NGINX, an open-source web server that can also be used as a reverse proxy, load balancer, and HTTP cache.

Understanding Load Balancing Methods

NGINX supports several load balancing methods:

  • Round Robin – Requests are distributed across the servers in turn; this is the default method.
  • Least Connections – A new request is sent to the server with the fewest active connections.
  • IP Hash – The client’s IP address is used to determine which server receives the request.

Basic NGINX Load Balancer Configuration

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}

This configuration sets up a basic round-robin load balancer that proxies incoming HTTP traffic to three backend servers.
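Server entries in an upstream block also accept per-server parameters. For example, a server can be marked as a backup (used only when the primary servers are unavailable) or temporarily taken out of rotation with down. A sketch using illustrative hostnames:

upstream backend {
    server backend1.example.com;
    server backend2.example.com down;    # temporarily excluded from rotation
    server backup1.example.com backup;   # receives traffic only if the others fail
}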

Load Balancing with Weighted Round Robin

upstream backend {
    server backend1.example.com weight=3;
    server backend2.example.com weight=2;
    server backend3.example.com weight=1;
}

The weight parameter sets the proportion of requests each server receives: with weights of 3, 2, and 1, backend1 handles roughly half of all requests, backend2 a third, and backend3 a sixth.

Least Connections Load Balancing

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

With least_conn, NGINX directs traffic to the server with the fewest active connections.

IP Hash Load Balancing

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

The IP hash configuration uses the client’s IP address to determine which server receives the request, so requests from the same client consistently reach the same server. This is useful for session persistence.

Using SSL with Load Balancing

To implement SSL termination at the load balancer level, use the following configuration:

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        proxy_pass http://backend;
    }
}

This snippet assumes that you have already generated an SSL certificate and key, and placed them in the appropriate directory. Note that TLS terminates at the load balancer here: traffic between NGINX and the backend servers travels over plain HTTP.
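When terminating SSL at the load balancer, it is also common to forward the original host, client address, and protocol so backend applications can generate correct links and logs. A sketch extending the configuration above (header choices are conventional, not required):

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://backend;
    }
}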

Advanced Health Checks

NGINX Plus offers active health checks that probe servers and automatically remove unhealthy ones from the pool. Open-source NGINX provides passive health checks through the max_fails and fail_timeout server parameters, and more advanced behavior can be scripted.
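The passive checks built into open-source NGINX can be tuned per server. In this sketch, a server that fails three times within 30 seconds is considered unavailable for the next 30 seconds (the values are illustrative):

upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backend3.example.com max_fails=3 fail_timeout=30s;
}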

See also: Health Checks in NGINX: The Complete Guide

Dynamic Upstream

In environments where backend servers change frequently, you might use NGINX Plus or third-party modules that allow dynamic reconfiguration of the upstream group without reloading NGINX.
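With open-source NGINX, one common workaround is to proxy to a variable, which forces the hostname to be re-resolved at request time via the resolver directive instead of only at startup. A sketch, assuming a DNS server at 10.0.0.2 (an illustrative address):

server {
    listen 80;
    resolver 10.0.0.2 valid=30s;   # illustrative DNS server; cache results for 30s

    location / {
        set $backend_host backend.example.com;
        proxy_pass http://$backend_host;
    }
}

Note that when proxy_pass uses a variable like this, the request bypasses any statically defined upstream block, so load balancing then depends on the DNS records returned.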

Conclusion

Efficient load balancing ensures high availability and reliability by distributing the workload evenly across multiple servers. NGINX provides a powerful and flexible load balancing solution that can be adapted to a wide range of scenarios. By following the steps in this tutorial, you can build anything from a basic to an advanced load balancer, choosing among NGINX’s methods to suit your specific requirements.