
Reverse Proxy Basics

Learning Focus

Leave this lesson with a working understanding of reverse proxy basics that you can apply immediately in production.

Nginx is the most common reverse proxy in production. It handles SSL termination, compression, rate limiting, and static files — then passes dynamic requests to your backend app.
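
Compression and rate limiting, for instance, each take only a few directives. A minimal sketch (the zone name per_ip and the limits are illustrative, not prescriptive):

```nginx
# http block — illustrative names and limits
gzip on;
gzip_types text/plain text/css application/json application/javascript;

# Allow ~10 requests/second per client IP, tracked in a shared 10 MB zone
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=per_ip burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}
```

The burst parameter lets short spikes through without queueing delay; requests beyond it get a 503 (or 429 if you set limit_req_status).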


Minimal Reverse Proxy

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

This works, but it is incomplete. Without additional proxy headers, your application will see every request as coming from the proxy itself, which causes problems with logging, redirects, and anything that depends on the client IP or scheme.


Production Reverse Proxy Configuration

/etc/nginx/conf.d/app.conf
# Upstream block — define your backend(s)
upstream app_backend {
    server 127.0.0.1:3000;
    keepalive 32;  # Keep up to 32 idle connections to the backend
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log warn;

    # ---- Security ----
    server_tokens off;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;  # legacy header; modern browsers ignore it

    # ---- Static files served directly (bypass backend) ----
    location /static/ {
        root /var/www/example.com;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # ---- Proxy to backend ----
    location / {
        proxy_pass http://app_backend;

        # HTTP/1.1 is required for keep-alive to the upstream
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Pass real client information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 10s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
        proxy_busy_buffers_size 8k;
    }
}

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

Why Each Header Matters

Header              Set to                        Why
Host                $host                         Backend needs to know which domain was requested
X-Real-IP           $remote_addr                  Real client IP (before Nginx)
X-Forwarded-For     $proxy_add_x_forwarded_for    Full chain of proxied IPs
X-Forwarded-Proto   $scheme                       Was the original request HTTP or HTTPS?
Connection          "" (empty)                    Required for HTTP/1.1 keep-alive to upstream

Without these headers, your backend will see all requests coming from 127.0.0.1 (Nginx itself) and will not know the original scheme or client IP.
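
On the backend side, recovering the original client means reading these headers back. A minimal shell sketch of parsing an X-Forwarded-For chain (the sample addresses are made up); the left-most entry is the original client:

```shell
# Hypothetical X-Forwarded-For value as the backend might receive it:
# original client first, then each proxy that handled the request
xff="203.0.113.7, 10.0.0.5, 127.0.0.1"

# Take the left-most entry and strip any surrounding whitespace
client_ip=$(printf '%s' "$xff" | cut -d',' -f1 | tr -d ' ')
echo "$client_ip"   # -> 203.0.113.7
```

Only trust X-Forwarded-For when the request demonstrably came through your own proxy; clients can send this header themselves.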


WebSocket Proxying

WebSockets require an HTTP upgrade — you must add these headers:

location /ws/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;

    # Required for the WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    # Keep WebSocket connections alive longer
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
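
A common refinement, not required for the snippet above, is to set the Connection header through a map in the http block, so a single location can handle both upgraded and plain requests:

```nginx
# http block — standard upgrade map
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then in the location, instead of the hardcoded value:
#     proxy_set_header Connection $connection_upgrade;
```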

Upstream Block Patterns

# Single backend
upstream app {
    server 127.0.0.1:3000;
}

# Multiple backends (round-robin by default)
upstream app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

# With weights
upstream app {
    server 127.0.0.1:3000 weight=5;  # Gets 5x more traffic
    server 127.0.0.1:3001 weight=1;
}

# Least connections
upstream app {
    least_conn;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

# IP hash (sticky sessions)
upstream app {
    ip_hash;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

# With failure detection
upstream app {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s backup;
}
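
To build intuition for the default round-robin policy, here is a toy simulation in shell. The backend list mirrors the examples above; the real selection happens inside Nginx, this just models the cycling order:

```shell
# Toy round-robin: requests cycle through backends in list order
set -- 127.0.0.1:3000 127.0.0.1:3001 127.0.0.1:3002
n=$#
for req in 0 1 2 3 4 5; do
    idx=$(( req % n + 1 ))          # 1-based position in the list
    eval backend=\${$idx}
    echo "request $req -> $backend"
done
```

With three backends, requests 0 to 5 land on 3000, 3001, 3002, 3000, 3001, 3002. Weights change this to a proportional rotation; least_conn and ip_hash replace it entirely.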

Proxy Cache (Optional)

# Define the cache zone in the http block (nginx.conf or conf.d/cache.conf)
proxy_cache_path /var/cache/nginx/app
                 levels=1:2
                 keys_zone=app_cache:10m
                 max_size=1g
                 inactive=60m
                 use_temp_path=off;

# In the server block — use the cache
location / {
    proxy_pass http://app_backend;
    proxy_cache app_cache;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    proxy_cache_key "$scheme$request_method$host$request_uri";

    # Add cache status header for debugging
    add_header X-Cache-Status $upstream_cache_status;

    # Skip the cache when the client sends a Cache-Control header
    proxy_cache_bypass $http_cache_control;
    proxy_no_cache $http_cache_control;
}
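
To see what a cache entry's key looks like, you can assemble it by hand, since proxy_cache_key is plain variable concatenation (the sample request values below are illustrative):

```shell
# Sample request values (illustrative)
scheme=https
request_method=GET
host=example.com
request_uri=/static/app.js

# Same concatenation as the proxy_cache_key directive
cache_key="${scheme}${request_method}${host}${request_uri}"
echo "$cache_key"   # -> httpsGETexample.com/static/app.js
```

Internally Nginx stores each entry under an MD5 hash of this string, which is why changing proxy_cache_key effectively invalidates all existing entries.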

Test the Proxy

# Test response from server
curl -I https://example.com/

# Check headers (are upstream headers present?)
curl -sv https://example.com/ 2>&1 | grep "^<"

# Test with specific Host header (useful for local testing)
curl -H "Host: example.com" http://localhost/

# Is the backend actually running?
curl -I http://127.0.0.1:3000/

# Check if proxy is passing headers correctly
curl -s https://example.com/debug/headers | head -20

Hands-On Practice

# Verify Nginx is running
sudo systemctl status nginx

# Test config syntax
sudo nginx -t

# Reload without downtime
sudo nginx -s reload

# Check error log
sudo tail -20 /var/log/nginx/error.log

Common Pitfalls

Pitfall                              What happens                       Fix
Editing config without reloading     Changes not applied                sudo nginx -s reload after every edit
Not running nginx -t first           Reload breaks on a syntax error    Always test syntax before reloading
Wrong socket path for PHP-FPM        502 Bad Gateway                    ls /run/php/ and verify the exact socket filename

What's Next

  • Continue to the next lesson in this module, or go to the module index for an overview.
  • Use the Cheatsheets for quick CLI reference.