
Performance Tuning

Learning Focus

Leave this lesson with a working understanding of performance tuning that you can apply immediately in production.

Nginx is fast out of the box, but its defaults are conservative. The settings below increase throughput and reduce latency; measure before and after every change so you know which ones actually matter.


Baseline First

# Measure current performance before tuning
curl -w "TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" \
-o /dev/null -s https://example.com/

# Check current compression
curl -H "Accept-Encoding: gzip" -I https://example.com/ | grep content-encoding

# Check HTTP version
curl -I --http2 https://example.com/ | head -1

# Benchmark
ab -n 1000 -c 50 https://example.com/ | grep "Requests per second"
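
A single curl sample is noisy; repeating the same measurement a few times gives a steadier baseline to compare against after each change (the run count and URL below are illustrative):

# Take 10 TTFB samples; sorted output makes the median easy to read
for i in $(seq 1 10); do
    curl -o /dev/null -s -w "%{time_starttransfer}\n" https://example.com/
done | sort -n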

Worker Tuning (nginx.conf)

# Match CPU core count (or use auto)
worker_processes auto;

# Max open files per worker — set to match ulimit -n
worker_rlimit_nofile 65535;

events {
    # Max connections per worker
    # Total capacity = worker_processes × worker_connections
    worker_connections 1024;

    # Accept all queued connections per iteration
    multi_accept on;

    # Best event model for Linux
    use epoll;
}
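
For example, four workers at 1024 connections each gives a theoretical ceiling of 4 × 1024 = 4096 simultaneous connections. A proxied request holds two of those (one to the client, one to the upstream), so effective request capacity is roughly half that.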
# Check the current per-process open-file limit
ulimit -n

# Raise the system-wide file descriptor ceiling
sudo sysctl -w fs.file-max=100000

# Make permanent
echo "fs.file-max = 100000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
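
On systemd-managed installs the service's own limit must also allow it, because worker_rlimit_nofile cannot exceed what the process is actually granted. A minimal sketch; the unit name nginx and the 65535 value are assumptions about a typical package install:

# Create a drop-in override raising the nginx service's open-file limit
sudo systemctl edit nginx
# In the editor that opens, add:
#   [Service]
#   LimitNOFILE=65535
# Reload and restart so workers pick up the new limit
sudo systemctl daemon-reload
sudo systemctl restart nginx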

HTTP/2

In server block
listen 443 ssl http2;
# Verify HTTP/2 is negotiated
curl -I --http2 https://example.com/ | head -1
# Expected: HTTP/2 200
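
On nginx 1.25.1 and newer, the http2 parameter on listen is deprecated in favor of a standalone directive; if your build is recent enough, this form is preferred:

listen 443 ssl;
http2 on;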

Compression

In http block
# Gzip (universal support)
gzip on;
# Note: woff/woff2 fonts are already compressed internally, so gzip gains little there
gzip_types text/plain text/css application/json
           application/javascript text/xml
           application/xml image/svg+xml
           application/x-font-ttf font/woff font/woff2;
gzip_min_length 1024; # Skip files smaller than 1KB
gzip_comp_level 5; # 1-9; 5 is good balance
gzip_vary on; # Vary: Accept-Encoding header
gzip_proxied any; # Compress for all proxy clients
# Verify gzip
curl -H "Accept-Encoding: gzip" -I https://example.com/ | grep content-encoding
# Expected: content-encoding: gzip

# Measure compression ratio
curl -H "Accept-Encoding: gzip" -s -o - https://example.com/ | wc -c
curl -s https://example.com/ | wc -c
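
To turn those two byte counts into a percentage saved, a small sketch using the same illustrative URL:

# Compressed vs. uncompressed size, plus percent saved
gz=$(curl -s -H "Accept-Encoding: gzip" https://example.com/ | wc -c)
raw=$(curl -s https://example.com/ | wc -c)
echo "compressed: ${gz}B  uncompressed: ${raw}B  saved: $(( (raw - gz) * 100 / raw ))%"

(curl does not decompress the response when the Accept-Encoding header is set manually, so the first count is the compressed size.)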

Browser Caching

In server block
# Fingerprinted static assets — cache forever (content hash in filename)
location ~* \.(css|js|png|jpg|jpeg|gif|ico|webp|svg|woff|woff2|ttf|eot)$ {
    expires 1y;
    add_header Cache-Control "public, max-age=31536000, immutable";
    access_log off; # Skip logging for static files
}

# HTML — short cache (content changes frequently)
location ~* \.html$ {
    expires 1h;
    add_header Cache-Control "public, max-age=3600, must-revalidate";
}

# API responses — no caching
location /api/ {
    add_header Cache-Control "no-store, no-cache, must-revalidate";
    proxy_pass http://127.0.0.1:3000;
}
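
To confirm the headers are actually being sent, check one asset of each type; the paths below are hypothetical examples:

# Fingerprinted asset: expect the one-year, immutable policy
curl -sI https://example.com/assets/app.css | grep -i cache-control
# Expected: cache-control: public, max-age=31536000, immutable

# HTML page: expect the short, revalidating policy
curl -sI https://example.com/index.html | grep -i cache-control
# Expected: cache-control: public, max-age=3600, must-revalidate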

Keep-Alive and Buffer Settings

In http block
# Keep-alive
keepalive_timeout 65; # How long to hold idle connection
keepalive_requests 1000; # Max requests per connection

# Buffers
client_body_buffer_size 128k;
client_header_buffer_size 1k;
large_client_header_buffers 4 8k;

# Sendfile (use kernel for file delivery — bypasses userspace copy)
sendfile on;
tcp_nopush on; # Send response header + start of file together
tcp_nodelay on; # No buffering on keep-alive connections

# Upstream keep-alive (for proxy)
proxy_http_version 1.1;
proxy_set_header Connection "";
keepalive 32; # In upstream block — idle connections to backend
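
These three directives only work together: keepalive lives in the upstream block, while proxy_http_version and the cleared Connection header sit alongside proxy_pass. A minimal sketch; the upstream name backend and the port are assumptions:

upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;                        # idle connections held open to the backend
}

server {
    location /api/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # connection reuse requires HTTP/1.1
        proxy_set_header Connection "";  # strip "close" so connections are reused
    }
}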

Proxy Cache (For Reverse Proxy Sites)

In http block
proxy_cache_path /var/cache/nginx
                 levels=1:2
                 keys_zone=app_cache:10m
                 max_size=1g
                 inactive=60m
                 use_temp_path=off;
In location block
proxy_cache app_cache;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_use_stale error timeout updating;
add_header X-Cache-Status $upstream_cache_status;

# Bypass the cache when the client sends Pragma or Authorization headers
# (POST requests are never cached by default)
proxy_cache_bypass $http_pragma $http_authorization;
proxy_no_cache $http_pragma $http_authorization;
# Check the cache is working
curl -sI https://example.com/ | grep -i x-cache-status
# Expected: x-cache-status: MISS on the first request, HIT afterwards
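
A quick way to watch the cache warm up is to request the same URL twice (URL illustrative):

# First request should report MISS, the second HIT
for i in 1 2; do
    curl -sI https://example.com/ | grep -i x-cache-status
done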

sendfile vs. No sendfile

| Setting | How it works | Best for |
| --- | --- | --- |
| sendfile off | Nginx reads the file into userspace, then sends it | Small files, development |
| sendfile on | The OS kernel sends the file directly to the socket | Large files, production |
| sendfile on; tcp_nopush on | Batches the header with the beginning of the file | Most static serving |
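
One related knob: with sendfile on, a single very fast connection can keep a worker busy for a long stretch. sendfile_max_chunk caps how much is sent per call; the 1m value below is illustrative, and recent nginx releases already default to 2m:

sendfile on;
tcp_nopush on;
sendfile_max_chunk 1m;   # cap bytes per sendfile() call so one connection cannot hog a worker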

Quick Wins Checklist

| Optimization | Impact | Where |
| --- | --- | --- |
| Enable OPcache | Very High | php.ini |
| Enable HTTP/2 | High | listen 443 ssl http2 |
| Enable gzip | High | http block |
| Browser cache headers | High | location block |
| sendfile on | Medium | http block |
| UNIX socket for FPM | Medium | fastcgi_pass unix:/... |
| keepalive_requests 1000 | Medium | http block |
| Upstream keepalive 32 | Medium | upstream block |
| multi_accept on | Low | events block |
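
The "UNIX socket for FPM" item means pointing fastcgi_pass at PHP-FPM's socket instead of a TCP port, which avoids local TCP overhead. A sketch; the socket path is distro- and version-dependent, so check your FPM pool config:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Path below is an assumption (e.g. Debian/Ubuntu php8.2-fpm packages)
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
}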

Benchmarking

# Apache Bench
ab -n 5000 -c 50 https://example.com/

# wrk (modern, better statistics)
wrk -t4 -c100 -d30s https://example.com/

# hey (simple, Go-based)
hey -n 10000 -c 100 https://example.com/

# Watch workers during load test
watch -n1 'ps aux | grep "nginx: worker" | grep -v grep | wc -l'
watch -n1 'free -m'
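
To make runs comparable, capture one before and one after each change and diff them (file names are arbitrary):

# Baseline run
wrk -t4 -c100 -d30s https://example.com/ > before.txt

# Apply a change, validate, reload
sudo nginx -t && sudo systemctl reload nginx

# Identical run again, then compare
wrk -t4 -c100 -d30s https://example.com/ > after.txt
diff before.txt after.txt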