Sometimes a single server can no longer handle the growing number of requests — the website starts slowing down, and users begin to complain. That’s when it’s time to think about scaling. The most logical step is to add more servers and distribute the traffic between them. This process is called load balancing. In this guide, you’ll learn how to set up load balancing with Nginx quickly and easily.
Nginx is one of the most popular web servers and reverse proxies. Installing it takes just a couple of commands.
For Debian/Ubuntu:
sudo apt update
sudo apt install nginx
For CentOS, AlmaLinux, Rocky Linux:
sudo dnf install nginx
After installation, make sure to start and enable the service:
sudo systemctl enable --now nginx
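To confirm the service is running, check its status (the exact output varies slightly by distribution):
sudo systemctl status nginx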
Nginx uses a special block called upstream to manage load balancing. Inside this block, you define all your backend servers and specify how requests should be distributed between them.
Basic Example:
upstream myapp {
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}
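If a backend application listens on a port other than 80, append the port to its address. The port 8080 below is only an assumption for illustration:
server 192.168.1.101:8080;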
Load Balancing Algorithms
→ round-robin (default) — requests are sent to each server in turn, evenly distributing the load.
→ least_conn — selects the server with the fewest active connections (see the sketch after this list).
→ ip_hash — routes requests from the same client IP to the same server every time, which keeps a user's session on one backend.
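These algorithms are enabled with a single directive at the top of the upstream block. A minimal sketch using least_conn (the same pattern applies to ip_hash):
upstream myapp {
    least_conn;
    server 192.168.1.101;
    server 192.168.1.102;
}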
Server Parameters
→ weight — sets server priority. For example:
server 192.168.1.101 weight=5;
server 192.168.1.102 weight=1;
In this case, the first server will receive roughly five times as many requests as the second.
→ max_fails and fail_timeout — temporarily exclude a server if it stops responding. Example:
server 192.168.1.101 max_fails=3 fail_timeout=30s;
Here, after 3 failed attempts within 30 seconds, Nginx stops sending requests to this server for the next 30 seconds.
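Both kinds of parameters can be combined on the same server line. A short sketch with illustrative addresses and values:
upstream myapp {
    server 192.168.1.101 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.1.102 weight=1 max_fails=3 fail_timeout=30s;
}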
Add the upstream block to your Nginx configuration and set up the proxy:
http {
    upstream myapp {
        server 192.168.1.101;
        server 192.168.1.102;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp;
        }
    }
}
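If your backend applications need to know the original client address, you will usually also forward a few headers inside the location block. This is optional for balancing itself; a minimal sketch:
location / {
    proxy_pass http://myapp;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}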
Beyond HTTP
Nginx can balance more than plain HTTP. The same upstream mechanism works with other protocols, each through its own pass directive:
→ FastCGI — fastcgi_pass
→ uWSGI — uwsgi_pass
→ SCGI — scgi_pass
→ Memcached — memcached_pass
→ gRPC — grpc_pass (see the sketch after this list)
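The configuration pattern stays the same; only the pass directive changes. As an illustration, here is a sketch for gRPC: the backend addresses and port 50051 are assumptions, and gRPC requires an HTTP/2 listener:
upstream grpc_backend {
    server 192.168.1.101:50051;
    server 192.168.1.102:50051;
}

server {
    listen 8080 http2;
    location / {
        grpc_pass grpc://grpc_backend;
    }
}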
First, check for configuration errors:
sudo nginx -t
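On a default installation you should see output similar to this (the configuration path may differ on your system):
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful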
If everything is correct, apply the changes:
sudo nginx -s reload
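You can then confirm the balancer responds by requesting it from any machine; replace the placeholder hostname with your load balancer's IP or domain:
curl -I http://your-load-balancer/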
You’ve just set up a fully functional load balancer. Nginx will now automatically decide which server should handle each incoming request, following the algorithm you chose and the weights you assigned (and, with least_conn, the number of active connections).
This setup helps prevent overloads and prepares your project for future growth. Want to scale even further? Simply add more servers to the upstream block — Nginx can handle it effortlessly.