
How to realize load balancing in Nginx


This article mainly shows you how to achieve load balancing in Nginx. The content is easy to understand and clearly organized; I hope it helps resolve your doubts. Let me lead you through studying and learning about Nginx load balancing.

1. Introduction to Nginx

Nginx is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server (mail proxy); in fact, one of the earliest purposes of the project was to serve as a mail proxy. It is widely used in production deployments because of its stability, rich feature set, simple configuration files, low consumption of system resources and high concurrency performance. Nginx uses an event-driven model (epoll) to implement I/O multiplexing and processes requests in an asynchronous, non-blocking way. Under high connection concurrency, Nginx is a good alternative to the Apache server. So why should we choose Nginx?

2. Characteristics of Nginx

High concurrency and high performance

High reliability (can run 24/7 without interruption)

Strong scalability (highly modular design, smooth addition of modules)

As a Web server: compared with Apache, Nginx uses fewer resources and supports more concurrent connections

As a load balancer server: you can customize the configuration, support virtual hosts, support URL redirection, support network monitoring, and so on.

Nginx installation is very simple, the configuration file is concise (it can even embed perl syntax), and it has few bugs

Handle static files, index files, and automatic indexing

Reverse proxy acceleration (no cache), simple load balancing and fault tolerance

Hot deployment is supported (Nginx can be upgraded without stopping the server)

That is why we choose Nginx. And Nginx offers more than this; only a few common features are briefly listed above.

3. Nginx load balancing

In actual production, the processing capacity and storage space of a single server are limited. Do not try to solve this by trading up to an ever more powerful machine: for a large website, no matter how powerful a single server is, it cannot meet the ever-growing business needs of the site. In this case, it is more appropriate to add servers to share the access and storage pressure of the original one. This is what we call load balancing. As a load-balancing server, Nginx uses a reverse proxy to balance load across multiple back-end servers. First, let's look at Nginx's load balancing strategies and algorithms.

3.1 Understanding the upstream module

The upstream module defines a group of proxied server addresses (that is, a server is selected from the defined list of back-end servers to accept the user's request), after which a load balancing algorithm can be configured. Let's look at the most basic load balancing example:

upstream test {
    server 10.20.151.114:80;
    server 10.20.151.115:80;
}
server {
    ...
    location / {
        proxy_pass http://test;   # forward requests to the server list defined by "test"
    }
}

3.2 Nginx load balancing policies

(1) Polling (round robin)

This is the most basic configuration method; the example above uses polling, which is the default load balancing policy of the upstream module. Each request is assigned to a different back-end server one by one in chronological order.

upstream test {
    server 10.20.151.114:80;
    server 10.20.151.115:80;
}

(2) ip_hash

Each request is allocated according to the hash of the client's IP address, so the same client IP always reaches the same back-end server. Ensuring that requests from the same IP go to a fixed machine can solve the session problem.

upstream test {
    ip_hash;    # the same client IP always reaches the same back-end server
    server 10.20.151.114:80 weight=1;
    server 10.20.151.115:80 weight=2;
}

(3) url_hash

Requests are allocated according to the hash of the requested URL, so each URL is directed to the same back-end server. This works well when the back-end servers cache resources: once a resource is cached, subsequent requests for the same URL can be served from that cache.

upstream test {
    hash $request_uri;    # each URL is directed to the same back-end server
    server 10.20.151.114:80 weight=1;
    server 10.20.151.115:80 weight=2;
}

(4) least_conn

Forward the request to the back-end server with the fewest active connections. The polling algorithm spreads requests evenly across the back ends so that their load is roughly the same; however, some requests take a long time, which can leave one back end heavily loaded. In such cases, least_conn achieves a better balancing effect.

upstream test {
    least_conn;    # forward requests to the back end with the fewest connections
    server 10.20.151.114:80 weight=1;
    server 10.20.151.115:80 weight=2;
}

(5) weight

Weighted mode: on the basis of the polling policy, a weight specifies each server's probability of being polled.

upstream test {
    server 10.20.151.114:80 weight=1;
    server 10.20.151.115:80 weight=2;    # polled twice as often as the server above
}

(6) fair

This algorithm intelligently balances the load according to page size and loading time: requests are allocated according to the response time of each back-end server, and servers with short response times are allocated requests first. (Note that fair is provided by a third-party module, not by stock Nginx.)

upstream test {
    server 10.20.151.114:80 weight=1;
    server 10.20.151.115:80 weight=2;
    fair;    # servers with short response times are allocated requests first
}

Status parameters for Nginx load balancing configuration

down: the current server temporarily does not participate in load balancing.

backup: a reserved backup machine. It receives requests only when all other non-backup machines are failing or busy, so it is under the least pressure.

max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: the time, in seconds, for which the server is suspended after max_fails failures. max_fails is used together with fail_timeout.
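As a minimal sketch, these status parameters can be combined in one upstream block (the first two addresses are the example ones used above; 10.20.151.116 is a hypothetical extra back end added only to illustrate backup):

```nginx
upstream test {
    server 10.20.151.114:80 weight=2 max_fails=3 fail_timeout=30s;  # suspended 30s after 3 failures
    server 10.20.151.115:80 down;     # temporarily out of the rotation
    server 10.20.151.116:80 backup;   # used only when the others are failing or busy
}
```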

Nginx can perform layer-2, layer-3, layer-4 and layer-7 load balancing. Layer 2 balances on the MAC address, layer 3 on the IP address, layer 4 on IP plus port, and layer 7 on application-layer information such as the URL. Because of space constraints we will not introduce each of these in detail here; interested readers can look them up. Below we take layer-7 load balancing as the example.
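For comparison, layer-4 (IP+port) balancing can be sketched with Nginx's stream module. This is an assumption-laden illustration, not part of the example that follows: it requires Nginx built with --with-stream, and the addresses and port are illustrative only.

```nginx
stream {
    upstream tcp_backend {
        server 10.20.151.112:3306;   # two hypothetical TCP back ends
        server 10.20.151.113:3306;
    }
    server {
        listen 3306;                 # balance raw TCP connections on IP+port,
        proxy_pass tcp_backend;      # with no knowledge of the application protocol
    }
}
```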

3.3 Nginx load balancer instance

Environment preparation: prepare three Nginx servers, one as a load balancing server and the other two as back-end servers.

10.20.151.240-proxy_server (load balancer server)

10.20.151.112-server1 (backend server 1)

10.20.151.113-server2 (back-end server 2)

(1) load balancer server configuration

vim /etc/nginx/nginx.conf          # edit the main configuration file
vim /etc/nginx/conf.d/test.conf    # edit the sub-configuration file
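The original article showed the configuration in screenshots that are not reproduced here. A minimal sketch of what test.conf might contain, assuming the three addresses from the environment preparation above:

```nginx
upstream test {
    server 10.20.151.112:80;   # server1
    server 10.20.151.113:80;   # server2
}
server {
    listen 80;
    server_name 10.20.151.240;
    location / {
        proxy_pass http://test;                   # forward to the upstream group
        proxy_set_header Host $host;              # pass the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client address
    }
}
```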

(2) backend server configuration

vim /usr/local/nginx/conf/nginx.conf    # modify the configuration file
vim /usr/local/nginx/html/index.html    # add test data
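On each back end, a minimal server block along these lines is enough (a sketch; the test page content is arbitrary and chosen only so the two back ends can be told apart):

```nginx
server {
    listen 80;
    server_name localhost;
    location / {
        root html;          # serves files from /usr/local/nginx/html
        index index.html;   # returns the test page added above
    }
}
```

For example, writing "server1" into index.html on the first back end and "server2" on the second makes it obvious which server answered each request.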

(3) load balancing test

Visit http://10.20.151.240/ in a browser. In actual production the two pages would return the same result; here, to make the effect visible, they return different content. Why does refreshing return different results? Because the default balancing policy (algorithm) is polling, each refresh returns a response from a different back-end server. This reduces the traffic hitting any single back end and improves client access efficiency, achieving the load-balancing effect.

When I add weights (weight)

Visit http://10.20.151.240/ again

What is the difference between adding a weight and not adding one? In actual production we generally give a somewhat higher weight to servers with a higher hardware configuration; clients then hit the higher-weight servers more often, which reduces the number of requests sent to the lower-configuration servers and achieves better load balancing.

When I add the backup status parameter

Visit http://10.20.151.240/ again

At this point, I deliberately shut down the first back-end server and continued to access http://10.20.151.240/.

When I add backup to a back-end server, it acts as a hot-standby server. The main purpose is that the standby can continue to provide the same service when the other back-end servers go down. (Note: the standby server does no work until the other back ends are down.) So load balancing not only balances the load across back-end servers, but also improves stability: by configuring the relevant status parameters, client requests keep being served even when a server goes down. I will not demonstrate the other status parameters here, since they are all configured in the same way.

The above is the full content of "how to achieve load balancing in Nginx". Thank you for reading! I hope the content shared here helps you; if you want to learn more, you are welcome to follow along.
