NGINX upstream module: Explained with examples

Updated: January 20, 2024 By: Guest Contributor

Introduction

The NGINX upstream module is a core feature of NGINX, the widely used web server and reverse proxy. Understanding how to leverage it lets administrators efficiently manage traffic to backend servers, implement load balancing, and ensure high availability.

The Upstream Module

The upstream module in NGINX lets you define a group of servers that can handle requests for a given service. It is most commonly used to load balance across multiple application instances: each upstream group receives requests for a specific service and distributes them according to a set of predefined rules.

Let’s start with a simple upstream block and then move on to more advanced examples as we explore the versatility of the NGINX upstream module.

Basic Configuration of Upstream

Configuring a basic upstream block involves defining a few parameters. Here’s a simple example:

http {
    upstream myapp1 {
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        location / {
            proxy_pass http://myapp1;
        }
    }
}

In the example above, we define an upstream block named myapp1 with two backend servers. The location / block inside the server context proxies every incoming request to the servers defined in the upstream block.
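
Backends frequently listen on non-default ports, and it is common to forward the original Host header to them. Below is a minimal sketch of the same setup with explicit ports and a proxy_set_header directive; the port 8080 and the hostnames are placeholders rather than anything prescribed by NGINX:

http {
    upstream myapp1 {
        # Explicit ports; NGINX assumes port 80 if none is given
        server backend1.example.com:8080;
        server backend2.example.com:8080;
    }

    server {
        listen 80;

        location / {
            # Forward the client's original Host header to the backend
            proxy_set_header Host $host;
            proxy_pass http://myapp1;
        }
    }
}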

Round-Robin Load Balancing

NGINX uses the round-robin method by default to distribute incoming traffic across all servers defined in the upstream block. Here’s how it looks in the configuration file:

http {
    upstream myapp2 {
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        location / {
            proxy_pass http://myapp2;
        }
    }
}

In the round-robin approach, NGINX forwards the first request to the first server, the second request to the second server, and then starts over with the third request going to the first server again, ensuring an even distribution.

Weighted Load Balancing

If one of your servers can handle more load, you can assign weights to your servers within the upstream block like this:

http {
    upstream myapp3 {
        server backend1.example.com weight=3;
        server backend2.example.com weight=1;
    }

    server {
        location / {
            proxy_pass http://myapp3;
        }
    }
}

The server with the higher weight receives proportionally more requests. In this case, backend1 handles three times as many requests as backend2, i.e. roughly three out of every four.

Least Connections Load Balancing

NGINX can also balance traffic based on the least number of active connections to the servers. It’s useful when requests demand significantly different amounts of server resources. Here is an example:

http {
    upstream myapp4 {
        least_conn;
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        location / {
            proxy_pass http://myapp4;
        }
    }
}

By adding the least_conn directive, we instruct NGINX to forward new connections to the server with the fewest active connections at the time.
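
The least_conn method also honors server weights, so connection counts are weighed when picking a server. Here is a minimal sketch combining the two, reusing the hostnames from above:

http {
    upstream myapp4 {
        least_conn;
        # Weights still apply: backend1 is preferred when the active
        # connection counts are proportionally similar
        server backend1.example.com weight=2;
        server backend2.example.com weight=1;
    }

    server {
        location / {
            proxy_pass http://myapp4;
        }
    }
}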

Health Checks and Server Failures

NGINX allows you to mark servers as down or backup in case of failures or maintenance windows:

http {
    upstream myapp5 {
        server backend1.example.com down;
        server backend2.example.com backup;
        server backend3.example.com;
    }

    server {
        location / {
            proxy_pass http://myapp5;
        }
    }
}

In this configuration, no requests are sent to backend1 because it is marked down. backend2 is a backup and only receives traffic when the primary servers are unavailable, so all regular traffic goes to backend3.

Advanced Health Checks with ngx_http_upstream_module

NGINX Plus offers active health checks, via the health_check directive, that probe backend servers on a schedule and remove unhealthy ones from rotation. The open-source version of NGINX relies on the passive health checks built into ngx_http_upstream_module; active checks require a third-party module such as nginx_upstream_check_module.

With passive health checks, NGINX counts failed attempts to communicate with a server: once max_fails failures occur within the fail_timeout window, the server is considered unavailable and is skipped for the duration of fail_timeout before being tried again.
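
Here is a minimal sketch of passive health checks using the standard max_fails and fail_timeout parameters of the server directive; the thresholds are illustrative rather than recommended values:

http {
    upstream myapp5 {
        # Consider a server failed after 3 errors within 30 seconds,
        # then skip it for the next 30 seconds
        server backend1.example.com max_fails=3 fail_timeout=30s;
        server backend2.example.com max_fails=3 fail_timeout=30s;
        server backend3.example.com backup;
    }

    server {
        location / {
            proxy_pass http://myapp5;
        }
    }
}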

IP Hash Load Balancing

If you require session persistence, the IP hash method can be used:

http {
    upstream myapp6 {
        ip_hash;
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        location / {
            proxy_pass http://myapp6;
        }
    }
}

The ip_hash directive tells NGINX to route requests based on the client’s IP address, so a given client is consistently sent to the same server (as long as that server is available), which provides session persistence.
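
If a server in an ip_hash group needs to be taken out of service temporarily, mark it down instead of removing the line; this preserves the current hashing of client addresses to the remaining servers. A minimal sketch:

http {
    upstream myapp6 {
        ip_hash;
        server backend1.example.com;
        # Temporarily out of service; keeping the entry with "down"
        # preserves the current hashing of client IP addresses
        server backend2.example.com down;
    }

    server {
        location / {
            proxy_pass http://myapp6;
        }
    }
}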

SSL/TLS Upstream Connections

To encrypt traffic between NGINX and your upstream servers:

http {
    upstream myapp7 {
        server backend1.example.com:443;
        server backend2.example.com:443;
    }

    server {
        location / {
            proxy_pass https://myapp7;
        }
    }
}

Using the https:// scheme in proxy_pass is what tells NGINX to encrypt traffic to the backend servers with SSL/TLS; the :443 in the server definitions simply points at the backends’ TLS port. Note that the server directive inside an upstream block does not accept an ssl parameter.
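
By default, NGINX does not verify the certificate presented by the upstream. If verification is needed, the proxy_ssl_* directives can be added alongside proxy_pass; the CA bundle path below is just an example location:

http {
    upstream myapp7 {
        server backend1.example.com:443;
        server backend2.example.com:443;
    }

    server {
        location / {
            proxy_pass https://myapp7;

            # Verify the backend certificate against a trusted CA bundle
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/nginx/ca.crt;

            # Send the server name via SNI when connecting to the backend
            proxy_ssl_server_name on;
        }
    }
}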

Conclusion

In this guide, we covered the fundamentals of the NGINX upstream module. Given the range of load balancing methods and configurations discussed, its versatility should be clear. Whether you aim to distribute load evenly across servers, ensure high availability, or implement session persistence, understanding the upstream module is key to optimizing application delivery.