Why you should use NGINX to serve your web application

Updated: January 19, 2024 By: Guest Contributor

Introduction

When you’re developing a web application, one critical choice is deciding which web server to use. NGINX has gained incredible popularity among developers and sysadmins for serving web applications efficiently. But what accounts for NGINX’s widespread adoption, and why should you consider it over other web servers?

In this tutorial, we’ll explore what NGINX is and what benefits it offers, then walk through hands-on examples, from basic setups to more advanced configurations, that can optimize your web application’s performance and scalability.

What is NGINX?

NGINX is an open-source web server that can also be used as a reverse proxy, load balancer, and HTTP cache. It was created by Igor Sysoev and released in 2004. NGINX is known for its high performance, stability, simple configuration, and low resource consumption.

Benefits of NGINX

NGINX offers several advantages over other web servers like Apache, including handling a high number of concurrent connections, reverse proxy features, load balancing, caching, and the ability to serve static content quickly. It’s this combination of performance and functionality that makes NGINX suitable for modern web applications that require quick load times and the ability to handle many simultaneous connections.

Installing NGINX

On Debian or Ubuntu systems, NGINX is available from the default package repositories:

sudo apt update
sudo apt install nginx

After installing, you can start the NGINX service with:

sudo systemctl start nginx

And to ensure it starts up automatically after a reboot:

sudo systemctl enable nginx

Basic NGINX Configuration

The main configuration file for NGINX is located at /etc/nginx/nginx.conf. On Debian-based systems, per-site server blocks usually live in /etc/nginx/sites-available/ (enabled via symlinks in /etc/nginx/sites-enabled/) or in /etc/nginx/conf.d/. In a server block you can change the listening port, server name, and location directives to handle incoming requests for static content.

server {
    listen 80;
    server_name example.com;

    location / {
        root /var/www/html;
        index index.html index.htm;
    }
}

The above configuration listens on the standard HTTP port 80 and serves files from /var/www/html. When a request arrives for example.com, NGINX returns index.html (or, if that is missing, index.htm) from that directory.
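After editing the configuration, it’s good practice to check it for syntax errors before applying it, and then reload NGINX gracefully so existing connections are not dropped:

```shell
# Validate the configuration files for syntax errors
sudo nginx -t

# Reload NGINX without interrupting active connections
sudo systemctl reload nginx
```

If nginx -t reports a problem, NGINX keeps running with the old configuration until you fix the error and reload again.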

Using NGINX as a Reverse Proxy

One of NGINX’s greatest strengths is its ability to act as a reverse proxy. This means it can forward requests to another server or service, which is particularly useful when used in conjunction with an application server or microservices.

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://your_app_server:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

This configuration functions as a gateway that receives client requests for your application and forwards them to the specified app server.
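If your application uses WebSockets, the proxy also needs to forward the HTTP connection-upgrade headers, since plain proxying will drop them. A minimal sketch (the /ws/ path and upstream address are placeholders for your own values):

```nginx
location /ws/ {
    proxy_pass http://your_app_server:port;
    proxy_http_version 1.1;                    # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;    # pass the client's Upgrade header through
    proxy_set_header Connection "upgrade";     # ask the backend to switch protocols
}
```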

Setting Up SSL with NGINX

Secure communication is crucial, so setting up SSL is a must. With NGINX, you can easily implement SSL to encrypt client-server communication.

server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate /path/to/signed_cert_plus_intermediates;
    ssl_certificate_key /path/to/private_key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;

    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256...';
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    location / {
        # Reverse proxy settings
        ...
    }
}

This snippet shows how to configure your server to handle HTTPS requests by specifying your certificate and private key locations, caching options, and preferred ciphers.
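With HTTPS in place, you’ll typically also want to redirect plain-HTTP traffic to the secure port. A small companion server block handles this:

```nginx
# Redirect all HTTP requests to HTTPS
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;  # permanent redirect, preserving the requested path
}
```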

Load Balancing with NGINX

As your application grows, you may need to balance the load across multiple servers. NGINX’s load balancing feature can distribute incoming traffic across several backend servers to even out resource usage and increase redundancy.

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        location / {
            proxy_pass http://myapp1;
            ...
        }
    }
}

The ‘upstream’ block defines the group of servers that NGINX will balance requests between.
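By default NGINX balances requests round-robin, but the upstream block also supports other strategies and per-server options. A sketch of a few common ones:

```nginx
upstream myapp1 {
    least_conn;                         # prefer the server with the fewest active connections
    server srv1.example.com weight=3;   # receives roughly three times the traffic of the others
    server srv2.example.com;
    server srv3.example.com backup;     # only used when the primary servers are unavailable
}
```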

Advanced Caching with NGINX

NGINX can also be configured to cache responses from a web application, which can drastically decrease response times for your users. Here’s a basic example configuration that sets up caching:

http {
    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

    server {
        location / {
            proxy_cache my_cache;
            proxy_pass http://myapplication;
            ...
        }
    }
}

This block creates a cache at the specified path: keys_zone=my_cache:10m allocates 10 MB of shared memory for cache keys, max_size caps the cache at 10 GB on disk, and inactive=60m removes cached data that hasn’t been accessed for an hour. The proxy_cache directive in the location block then turns on caching for those requests.
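In practice you’ll usually also tell NGINX how long responses stay fresh, and expose the cache status for debugging. A sketch extending the location block above:

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
    proxy_cache_valid 404 1m;        # cache not-found responses only briefly
    add_header X-Cache-Status $upstream_cache_status;  # reports HIT, MISS, etc. to the client
    proxy_pass http://myapplication;
}
```

Inspecting the X-Cache-Status response header is a quick way to confirm the cache is actually being hit.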

Performance Tuning NGINX

Tuning your NGINX server for optimal performance depends on your specific application and traffic patterns, but some common practices include adjusting worker_processes, worker_connections, and keepalive_timeout settings.

worker_processes auto;
# 'auto' starts one worker per CPU core; set in the main context

events {
    worker_connections 768;
    # Maximum simultaneous connections per worker process;
    # the optimal value depends on your load pattern
}
http {
    keepalive_timeout   30;
    # How long an idle keep-alive connection stays open;
    # adjust based on load and network conditions
}
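Enabling gzip compression is another common tuning step, since it shrinks text-based responses before they cross the network. A minimal sketch for the http context:

```nginx
http {
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;  # text/html is always compressed
    gzip_min_length 1024;  # skip compressing very small responses, where gzip overhead isn't worth it
}
```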


Conclusion

NGINX is an efficient and versatile tool for serving web applications. Its ability to handle many concurrent connections, offload work through reverse proxying, and speed up content delivery with caching are compelling reasons to consider it for your infrastructure.

Whether you are running a simple static site, a complex application, or a microservices architecture, NGINX provides the required flexibility and performance to keep your web services running smoothly.