How to configure load balancing in Nginx

2025-02-23 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article explains in detail how to configure load balancing in Nginx. It is shared as a reference; I hope you will have a working understanding of the topic after reading it.

1. Backend server

Backend servers are declared with the upstream directive. Each one can be specified by IP address and port, by domain name, or by UNIX socket. If a domain name resolves to multiple addresses, all of those addresses are used as backends. For example:

upstream backend {
    server blog.csdn.net/poechant;
    server 145.223.156.89:8090;
    server unix:/tmp/backend3;
}

The first backend is specified with a domain name. The second backend is specified with IP and port number. The third backend is specified using the UNIX socket.
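An upstream block by itself does nothing until a server block forwards traffic to it with proxy_pass. A minimal sketch of how the two fit together (the host names, port, and socket path are illustrative assumptions, not values from a real deployment):

```nginx
http {
    upstream backend {
        server 10.0.0.1:8080;       # IP address and port
        server app.example.com;     # domain name (resolved at startup)
        server unix:/tmp/backend3;  # UNIX socket
    }

    server {
        listen 80;

        location / {
            # Requests matching this location are distributed
            # across the servers in the "backend" group.
            proxy_pass http://backend;
        }
    }
}
```

If app.example.com resolved to several addresses, each address would become a separate backend in the group, as described above.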

2. Load balancing strategy

Nginx provides three strategies: round robin (the default), client IP hash (ip_hash), and weighted distribution (weight).

By default, Nginx uses round robin as the load balancing strategy, but that is not always what you want. Suppose a series of requests within a certain period all come from the same user, Michael: under round robin his first request may go to backend2, the next to backend3, then backend1, backend2, backend3, and so on. For applications that keep per-user state on the backend (sessions, local caches), this is inefficient. That is why Nginx also lets you hash on the client IP of users like Michael, Jason, and David, so that every request from a given client lands on the same backend server. It is used as follows:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

In this strategy, the key used for the hash is the first three octets of the client's IPv4 address, so all requests from the same /24 subnet hash to the same backend. This ensures that a given client reaches the same backend on every request. If the backend a client hashes to is currently unavailable, its requests are transferred to another backend.

Another keyword often used together with ip_hash is down. When a server is temporarily out of service, you can mark it with down so that it no longer receives requests. For example:

upstream backend {
    server blog.csdn.net/poechant down;
    server 145.223.156.89:8090;
    server unix:/tmp/backend3;
}

You can also specify a weight (weight) as follows:

upstream backend {
    server backend1.example.com;
    server 123.321.123.321:456 weight=4;
}

By default, weight is 1. In the example above the first server keeps the default weight of 1 and the second has weight 4, so the first server receives roughly 20% of the requests and the second roughly 80%. Note that weight and ip_hash historically could not be combined, since they are different and conflicting policies; in older Nginx versions weights were simply ignored under ip_hash (since version 1.3.1, ip_hash does honor server weights).

3. Retry strategy

For each backend you can specify how many failed attempts, within what time window, mark it as unavailable. The keywords used are max_fails and fail_timeout. As follows:

upstream backend {
    server backend1.example.com weight=5;
    server 54.244.56.3:8081 max_fails=3 fail_timeout=30s;
}

In the example above, after 3 failed attempts within 30 seconds, the second server is considered unavailable for the next 30 seconds. The default value of max_fails is 1, and the default fail_timeout is 10s. What counts as a failed attempt is specified by proxy_next_upstream (or fastcgi_next_upstream for FastCGI backends), and you can use proxy_connect_timeout and proxy_read_timeout to control how long Nginx waits for the upstream to respond.
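The failure-detection directives mentioned above sit in the proxying location, not in the upstream block. A hedged sketch of how they fit together (the directive values are illustrative, not recommendations):

```nginx
location / {
    proxy_pass http://backend;

    # An attempt counts as failed, and is retried on the next server,
    # on connection errors, timeouts, or these HTTP status codes.
    proxy_next_upstream error timeout http_502 http_503;

    # How long to wait while connecting to a backend ...
    proxy_connect_timeout 5s;
    # ... and while waiting for it to send the response.
    proxy_read_timeout 30s;
}
```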

One thing to note: when there is only one server in the upstream block, the max_fails and fail_timeout parameters are ignored. The consequence is that Nginx tries the upstream request only once, and if it fails, the request is abandoned. A somewhat crude workaround is to list your single server several times in the upstream block, as follows:

upstream backend {
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
}

4. Standby strategy

Starting with Nginx version 0.6.7, the backup keyword can be used. A server marked backup receives requests only when all non-backup servers are down or busy. Note that backup cannot be combined with the ip_hash keyword. An example:

upstream backend {
    server backend1.example.com;
    server backend2.example.com backup;
    server backend3.example.com;
}

That is the end of how to configure load balancing in Nginx. I hope the above content is helpful to you.


© 2024 shulou.com SLNews company. All rights reserved.
