NGINX AIO, Sendfile, and DirectIO: Explained with Examples

Updated: January 20, 2024 By: Guest Contributor

Introduction

NGINX, the high-performance web server and reverse proxy, is known for its scalability and low resource consumption. Much of that efficiency comes from built-in mechanisms that optimize file delivery, among them sendfile, asynchronous I/O (AIO), and direct I/O. In this guide, we take a deep dive into these features, explain how they work, and provide practical examples.

What is Sendfile in NGINX?

Sendfile is an optimized way to serve static files. By leveraging the sendfile() system call, NGINX copies data between file descriptors entirely within the kernel, avoiding the round trip through user space and the extra buffer copies that read()/write() would require. The result is lower CPU usage when serving static content.

server {
    location /files/ {
        # a request for /files/testfile.txt maps to /var/www/html/files/testfile.txt
        sendfile           on;
        sendfile_max_chunk 512k;
        root               /var/www/html;
    }
}

Example output: Faster responses and lower CPU load when serving static files.
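
Sendfile is commonly enabled globally and paired with tcp_nopush, which only takes effect when sendfile is on and lets NGINX fill packets with headers and file data before sending. A minimal http-level sketch (the chunk size is illustrative):

http {
    sendfile           on;   # zero-copy transfer in the kernel
    sendfile_max_chunk 1m;   # cap one sendfile() call so a fast client cannot monopolize a worker
    tcp_nopush         on;   # only effective with sendfile; coalesces headers and file data into full packets

    # ... server blocks ...
}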

Asynchronous I/O (AIO) in NGINX

NGINX supports AIO on Linux and FreeBSD through the aio directive. AIO lets a worker process issue non-blocking file reads (and, with aio_write, writes) and keep handling other requests while the disk operation completes. On Linux, AIO only applies to reads done with directio, so the two directives are typically configured together.

server {
    location /video/ {
        aio            on;
        directio       8m;   # files of 8 MB and larger bypass the page cache and are read asynchronously
        output_buffers 1 8m;
        root           /var/www/html;
    }
}

Example output: Smooth, asynchronous delivery of video files without blocking worker processes on disk I/O.
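
Whether these directives are available depends on how the binary was built: aio requires NGINX compiled with --with-file-aio, and aio threads requires --with-threads. One quick way to check the build flags before relying on them (the grep filter is just one option):

# nginx -V prints the configure arguments to stderr
nginx -V 2>&1 | tr ' ' '\n' | grep -E 'file-aio|with-threads'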

Integrating DirectIO in NGINX Configurations

DirectIO bypasses the operating system's page cache: files larger than or equal to the configured threshold are read straight from disk with the O_DIRECT flag. This keeps large, infrequently repeated transfers from evicting more useful data from the cache.

server {
    location /bigfiles/ {
        directio 4m;   # reads of files 4 MB and larger use O_DIRECT
        root     /var/www/html;
    }
}

Example output: A smaller page-cache footprint, since large files are read directly from disk instead of being cached in memory.
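
Direct I/O requires aligned reads, which NGINX exposes through the directio_alignment directive (default 512 bytes). On some file systems, such as XFS under Linux, the alignment needs to be raised to 4k. A small sketch building on the example above:

location /bigfiles/ {
    directio           4m;
    directio_alignment 4k;   # default is 512; XFS on Linux typically needs 4k
    root               /var/www/html;
}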

Combining AIO and Sendfile

In some cases, combining AIO with Sendfile yields better file-serving performance. On Linux, when both are enabled together with directio, AIO is used for files larger than or equal to the directio threshold, while smaller files (or all files when directio is off) are sent with sendfile. Since version 1.7.11, NGINX can also offload reading and sending of files to a thread pool with aio threads, which works alongside sendfile.

server {
    location /downloads/ {
        sendfile           on;
        sendfile_max_chunk 1m;
        aio                threads;   # offload blocking file operations to the default thread pool
        root               /var/www/html;
    }
}

Example output: High throughput for mixed file size workloads.
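
If large files should also bypass the page cache, add a directio threshold to this mix; the split described above then becomes explicit, with small files going through sendfile and larger ones read via O_DIRECT and AIO. A sketch assuming an illustrative 8 MB threshold:

location /downloads/ {
    sendfile           on;
    sendfile_max_chunk 1m;
    aio                threads;
    directio           8m;    # files >= 8 MB: O_DIRECT + AIO; smaller files: sendfile
    output_buffers     2 1m;
    root               /var/www/html;
}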

Advanced Configuration: AIO with SSL and Thread Pool

When serving content over SSL/TLS, responses are encrypted in user space, so sendfile's zero-copy path does not apply (unless kernel TLS offload is configured). Offloading file reads and writes to a thread pool with aio threads keeps worker processes from blocking on disk I/O while they handle the TLS work.

server {
    listen              443 ssl;
    ssl_certificate     /etc/ssl/mydomain.com.crt;
    ssl_certificate_key /etc/ssl/mydomain.com.key;

    location /secure/ {
        aio       threads;
        aio_write on;   # also offload writes, e.g. temporary files for proxied responses
        root      /var/www/html;
    }
}

Example output: TLS-encrypted responses delivered without blocking worker processes on disk reads.
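
By default, aio threads uses the built-in pool named default (32 threads, queue of 65536 tasks). For heavy workloads you can define a dedicated pool in the main context and reference it by name; the pool name and sizes below are illustrative:

# main context (top of nginx.conf)
thread_pool big_io threads=64 max_queue=65536;

server {
    location /secure/ {
        aio threads=big_io;   # use the dedicated pool instead of "default"
    }
}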

Performance Testing and Benchmarking

Use tools such as ab (ApacheBench) or wrk to benchmark NGINX with the different directives applied, comparing results before and after each change.

# Benchmarking NGINX with sendfile on
ab -n 10000 -c 100 http://your-domain.com/files/testfile.txt

Example output: ab reports metrics such as Requests per second, Time per request, and Transfer rate.

# Benchmarking NGINX with aio and directio
ab -n 10000 -c 100 http://your-domain.com/bigfiles/largefile.zip

Example output: Comparable metrics that let you quantify the benefit of aio and directio for large files.
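
wrk can exercise the same endpoints; it drives load with a fixed number of threads and connections for a set duration. An illustrative invocation:

# Benchmarking with wrk: 4 threads, 100 connections, 30 seconds
wrk -t4 -c100 -d30s http://your-domain.com/files/testfile.txt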

Conclusion

Sendfile, AIO, and DirectIO each play a distinct role in optimizing file delivery with NGINX. Sendfile streamlines serving of static content, while AIO and DirectIO address high concurrency and large-file handling, respectively. For peak performance, an NGINX admin should benchmark a combined strategy tailored to their specific use cases.