Reverse Proxy and Load Balancing

Nginx's most powerful use case is as a reverse proxy. It sits in front of your application servers — Node.js, Python, Go, PHP-FPM — buffers requests, handles SSL termination, and distributes load.


The Proxy Mental Model

Client → Nginx (public-facing)
         ├── SSL termination
         ├── Compression
         ├── Rate limiting
         ├── Static files served directly
         └── proxy_pass → Upstream app (internal port)
             ├── Node.js :3000
             ├── Python/Gunicorn :8000
             ├── Go app :8080
             └── Another Nginx instance
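The mental model above maps onto a config like this minimal sketch — the hostname, certificate paths, and backend port are placeholders, not values from this module:

```nginx
# Minimal reverse proxy: terminate TLS at Nginx, serve static files
# directly, and forward everything else to an internal app server.
server {
    listen 443 ssl;
    server_name example.com;                               # placeholder hostname

    ssl_certificate     /etc/nginx/certs/example.com.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    # Static assets served directly, bypassing the app server.
    location /static/ {
        root /var/www/example;
    }

    # Everything else is proxied to the upstream app (port is an assumption).
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```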

What You Will Learn

  • How proxy_pass works and what headers must be forwarded
  • How to define upstream blocks for multiple backend servers
  • Load balancing strategies: round-robin, least connections, IP hash, weighted
  • How to pass the real client IP through to the backend
  • Health checks and failure handling
  • WebSocket proxying
  • Buffering and timeout settings
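As a preview of upstream blocks and the load-balancing strategies listed above, here is a sketch — the `app_backend` name and the backend ports are assumptions:

```nginx
# A named group of backend servers. Round-robin is the default;
# uncomment exactly one directive below to change the strategy.
upstream app_backend {
    # least_conn;                   # route to the fewest active connections
    # ip_hash;                      # pin each client IP to one server

    server 127.0.0.1:3000 weight=3; # weighted: receives 3x the traffic
    server 127.0.0.1:3001;
    server 127.0.0.1:3002 backup;   # used only when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```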

Best Practices

  • Always set proxy_set_header X-Real-IP $remote_addr; — backends need the real client IP
  • Set proxy_set_header Host $host; to pass the correct hostname to the backend
  • Use proxy_connect_timeout and proxy_read_timeout to avoid hanging workers
  • For WebSockets, add proxy_http_version 1.1 and the Upgrade/Connection headers
  • Use least_conn for long-lived connections (WebSockets, uploads); the default round-robin is fine for short API calls
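The practices above combine into a location block like this sketch — the backend address and the specific timeout values are assumptions, not one-size-fits-all recommendations:

```nginx
# Map the client's Upgrade header to the right Connection value,
# so plain HTTP and WebSocket requests share one location block.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3000;   # backend address is an assumption

        # Forward the real client identity and hostname.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Fail fast instead of tying up workers on a dead backend.
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;

        # WebSocket support: HTTP/1.1 plus the Upgrade handshake headers.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```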

Success Checkpoint

By the end of this module you should be able to proxy traffic to any HTTP backend, configure upstream load balancing, forward the correct headers, and handle upstream failures gracefully.