How to implement load balancing algorithm in Nginx

2025-01-16 Update From: SLTechnology News&Howtos


This article explains how load balancing is implemented in Nginx and what strategies are available. I hope you find it useful.

1. How are Nginx's load-balancing algorithms implemented, and what strategies are available?

Load balancing is a common Nginx feature. The more requests a server receives per unit of time, the greater the load on it; once the load exceeds the server's capacity, the server will crash.

To avoid crashes, load balancing is used to spread the load: peer servers are grouped into a cluster, and incoming requests first reach a forwarding server, which distributes them to the less loaded backend servers.

Nginx load balancing offers five strategies:

(1) Round robin (default)

Each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is automatically removed from the rotation.

upstream backserver {
    server 192.168.0.12;
    server 192.168.0.13;
}
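An upstream block on its own does not route any traffic; it must be referenced from a server block via proxy_pass. A minimal sketch follows; the listen port, server_name, and location are illustrative assumptions, not values from the original article:

```nginx
# Hypothetical minimal fragment showing how the "backserver"
# upstream is actually used; port and server_name are examples.
http {
    upstream backserver {
        server 192.168.0.12;
        server 192.168.0.13;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # Requests are distributed across the upstream servers
            # in round-robin order by default.
            proxy_pass http://backserver;
        }
    }
}
```

The same proxy_pass pattern applies to every strategy below; only the directives inside the upstream block change.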

(2) weight (weighted round robin)

The higher a server's weight, the higher the probability it is selected. Weighting is mainly used when the backend servers have uneven performance, or to assign different weights to master and slave servers so that host resources are used effectively.

upstream backserver {
    server 192.168.0.12 weight=2;
    server 192.168.0.13 weight=8;
}

The higher the weight, the greater the probability of being selected; in the example above, the two servers receive 20% and 80% of requests respectively.

(3) ip_hash (IP binding)

Each request is assigned according to a hash of the client IP, so requests from the same IP always reach the same backend server. This effectively solves the session-sharing problem of dynamic websites.

upstream backserver {
    ip_hash;
    server 192.168.0.12:88;
    server 192.168.0.13:80;
}

(4) fair (third-party plug-in)

The upstream_fair module must be installed.

Compared with weight and ip_hash, the fair algorithm is more intelligent: it balances load according to page size and load time, giving priority to servers with short response times.

upstream backserver {
    server server1;
    server server2;
    fair;
}

Requests are assigned to whichever server responds fastest.
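If installing the third-party upstream_fair module is not an option, the built-in least_conn method is a commonly used alternative: it routes each request to the server with the fewest active connections, which similarly favors the backends that are keeping up. A sketch using only stock Nginx directives:

```nginx
# Alternative to the fair module using only built-in directives.
upstream backserver {
    least_conn;       # prefer the server with the fewest active connections
    server server1;
    server server2;
}
```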

(5) url_hash (third-party plug-in)

The hash package for Nginx must be installed.

Requests are assigned according to a hash of the requested URL, so each URL is always directed to the same backend server. This further improves the hit rate of backend cache servers.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
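Note that the hash_method directive belongs to the old third-party module; since Nginx 1.7.2, a hash directive is built into the stock upstream module, so on a modern Nginx the same effect needs no plug-in. A sketch, assuming a recent Nginx version:

```nginx
# Built-in URI hashing (Nginx >= 1.7.2); no third-party module needed.
upstream backserver {
    hash $request_uri consistent;   # "consistent" enables ketama consistent hashing
    server squid1:3128;
    server squid2:3128;
}
```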

2. Why do we need dynamic/static separation?

Nginx is currently the most popular web container. An important part of website optimization is making the site static, and the key to that is dynamic/static separation: splitting a dynamic website's resources, according to certain rules, into those that rarely change and those that change frequently. Once the dynamic and static resources are split, the static resources can be cached according to their characteristics.

Static resources are served only by static-resource servers, while dynamic requests go to dynamic servers.

Nginx handles static content very efficiently but is weaker at dynamic processing, so dynamic/static separation is widely used in enterprises.

Static resources such as images, JS, and CSS files are cached on the reverse proxy server (Nginx). When a browser requests a static resource, Nginx serves it directly, without forwarding the request to the backend server (Tomcat).

Files requested dynamically by the user, such as servlets and JSPs, are forwarded to the Tomcat server for processing, thereby achieving dynamic/static separation. This is also an important role of a reverse proxy server.
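The separation described above can be sketched as an Nginx configuration. The file extensions, document root, and Tomcat address below are illustrative assumptions, not values from the original article:

```nginx
# Hypothetical sketch of dynamic/static separation.
server {
    listen 80;
    server_name example.com;

    # Static resources: served directly by Nginx from local disk,
    # with a long cache lifetime.
    location ~* \.(gif|jpg|jpeg|png|css|js|ico)$ {
        root /var/www/static;
        expires 30d;
    }

    # Dynamic requests (servlets, JSPs): forwarded to Tomcat.
    location ~ \.(jsp|do)$ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```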

The above is how load balancing algorithms are implemented in Nginx. Hopefully some of these points will be useful in your daily work.
