
How Nginx achieves load balancing through upstream and proxy_pass


This article explains how Nginx achieves load balancing through upstream and proxy_pass, with the aim of being directly useful in practice. Load balancing is a broad topic and there is plenty of theory available online, so the focus here is on practical configuration drawn from operational experience.

Nginx load balancing

https://coding.net/u/aminglinux/p/nginx/git/blob/master/proxy/lb.md

Load balancing configuration of Nginx

Nginx achieves load balancing through upstream and proxy_pass. In essence this is still Nginx's reverse proxy feature, except that there are multiple servers at the back end.

Case 1 (simple round-robin)

upstream www {
    server 172.37.150.109:80;
    server 172.37.150.101:80;
    server 172.37.150.110:80;
}

server {
    listen 80;
    server_name www.aminglinux.com;

    location / {
        proxy_pass http://www/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Note: when there are multiple proxied machines, you need to use upstream to define a server group.

The name www can be customized and is referenced in the proxy_pass directive below.

With this configuration, nginx distributes requests evenly across the three servers in the www group in round-robin fashion.
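For reference (an addition of mine, not part of the original text): the default round-robin above is equivalent to giving every server the default weight of 1, so the same group could be written explicitly as:

upstream www {
    server 172.37.150.109:80 weight=1;   # weight defaults to 1 when omitted
    server 172.37.150.101:80 weight=1;
    server 172.37.150.110:80 weight=1;
}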

Case 2 (weighted round-robin + the ip_hash algorithm)

upstream www {
    server 172.37.150.109:80 weight=50;
    server 172.37.150.101:80 weight=100;
    server 172.37.150.110:80 weight=50;
    ip_hash;
}

server {
    listen 80;
    server_name www.aminglinux.com;

    location / {
        proxy_pass http://www/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Note: weights can be assigned to the three machines in the www group; the higher the weight, the more requests that machine receives.

ip_hash is an nginx load-balancing algorithm. The principle is simple: a hash value is computed from the client IP of the request, and that value determines which backend the request is sent to.

As a result, requests from the same client are always sent to the same backend unless that backend becomes unavailable, so ip_hash can be used to keep sessions sticky.
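One related point worth adding (taken from the nginx documentation rather than the original article): if a server in an ip_hash group has to be taken out of rotation temporarily, it should be marked down rather than deleted, so the hash mapping of the remaining client IPs stays stable. A minimal sketch:

upstream www {
    ip_hash;
    server 172.37.150.109:80;
    server 172.37.150.101:80;
    server 172.37.150.110:80 down;   # temporarily out of rotation; keeps the IP-to-server mapping stable
}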

Case 3 (other upstream configurations)

upstream www {
    server 172.37.150.109:80 weight=50 max_fails=3 fail_timeout=30s;
    server 172.37.150.101:80 weight=100;
    server 172.37.150.110:80 down;
    server 172.37.150.110:80 backup;
}

server {
    listen 80;
    server_name www.aminglinux.com;

    location / {
        proxy_next_upstream off;
        proxy_pass http://www/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Notes: down means the server does not currently participate in load balancing.

backup marks a reserve machine; requests are only sent to a backup server when all other (non-backup) servers have failed or are busy.

max_fails is the number of failed requests allowed; the default is 1. When the number of failures reaches this value, the machine is considered down. What counts as a failure is defined by proxy_next_upstream; by default a 404 status code is not counted as a failure.

fail_timeout defines the time window for counting failures: only if max_fails is reached within this period is the server treated as having failed. The default is 10 seconds.

proxy_next_upstream specifies, based on the error or response status code returned by the backend server, in which cases a request should be passed to the next server, so you can flexibly control when a backend is skipped in the distribution.

Syntax: proxy_next_upstream error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off;

Default: proxy_next_upstream error timeout;

error: an error occurred while establishing a connection with the backend server, sending a request to it, or reading its response header

timeout: a timeout occurred while establishing a connection with the backend server, sending a request to it, or reading its response header

invalid_header: the backend server returned an empty or invalid response header

http_500: the backend server returned a response with status code 500

http_502: the backend server returned a response with status code 502

http_503: the backend server returned a response with status code 503

http_504: the backend server returned a response with status code 504

http_404: the backend server returned a response with status code 404

off: stop passing the request to the next backend server
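To make the failover behaviour concrete, here is a sketch of my own (not from the original) that combines max_fails/fail_timeout with an explicit proxy_next_upstream list, so that connection errors, timeouts and 5xx responses cause a retry on the next server while a 404 is returned to the client as-is:

upstream www {
    server 172.37.150.109:80 max_fails=3 fail_timeout=30s;
    server 172.37.150.101:80 max_fails=3 fail_timeout=30s;
    server 172.37.150.110:80 backup;
}

server {
    listen 80;
    server_name www.aminglinux.com;

    location / {
        # retry on the next upstream for errors, timeouts and 5xx responses;
        # 404 responses are not retried because http_404 is not listed
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_pass http://www/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}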

Case 4 (routing by URI)

upstream aa.com {
    server 192.168.0.121;
    server 192.168.0.122;
}

upstream bb.com {
    server 192.168.0.123;
    server 192.168.0.124;
}

server {
    listen 80;
    server_name www.aminglinux.com;

    location ~ aa.php {
        proxy_pass http://aa.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ bb.php {
        proxy_pass http://bb.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://bb.com/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Description: requests for aa.php go to the aa.com group, requests for bb.php go to the bb.com group, and all other requests go to the bb.com group.
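A small optional refinement (my suggestion, not part of the original configuration): escape the dot and anchor the pattern so the regex location matches only URIs ending in aa.php, rather than any URI where aa is followed by an arbitrary character and php:

location ~ aa\.php$ {
    proxy_pass http://aa.com;   # a regex location cannot carry a URI part in proxy_pass
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}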

Case 5 (routing by directory)

upstream aaa.com {
    server 192.168.111.6;
}

upstream bbb.com {
    server 192.168.111.20;
}

server {
    listen 80;
    server_name www.aminglinux.com;

    location /aaa/ {
        proxy_pass http://aaa.com/aaa/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /bbb/ {
        proxy_pass http://bbb.com/bbb/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://bbb.com/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
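To illustrate how the location prefix and the proxy_pass URI interact here (based on standard proxy_pass behaviour, not stated in the original): when proxy_pass carries a URI, the part of the request path that matched the location prefix is replaced by that URI:

location /aaa/ {
    # the matched prefix "/aaa/" is replaced by the URI in proxy_pass,
    # so a request for /aaa/test.html is forwarded to the aaa.com group as /aaa/test.html;
    # with "location /" and "proxy_pass http://bbb.com/;", a request for /other/page.html
    # is forwarded to the bbb.com group as /other/page.html
    proxy_pass http://aaa.com/aaa/;
}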

In this setup, the single machine we used as the load balancer essentially acts as a distributor, spreading incoming requests across multiple back-end servers.

That covers how Nginx achieves load balancing through upstream and proxy_pass; hopefully it is of some help in practical use. For anything not covered here, the nginx documentation and the reference linked above are good places to look further.
