2025-03-30 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
To understand load balancing, you must first understand forward and reverse proxies.
Note: in a forward proxy, the proxy acts on behalf of the client (the user); in a reverse proxy, the proxy acts on behalf of the server.
What is load balancing?
The more requests a server receives per unit time, the greater the load on it; once the load exceeds the server's capacity, it crashes. To avoid crashes and give users a better experience, we spread the load across multiple servers with load balancing.
We can set up many servers as a cluster. When a user visits the website, the request first reaches an intermediate server, which selects a less loaded server in the cluster and forwards the request to it. This way, every visit keeps the load across the servers in the cluster roughly balanced, sharing the pressure and avoiding crashes.
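The idea above can be sketched in a few lines of Python. This is a hypothetical illustration only (the server names and the least-loaded policy are assumptions for demonstration, not something the article prescribes): an intermediate "balancer" tracks how many requests it has sent to each server and always routes the next request to the least loaded one.

```python
class Balancer:
    """Toy intermediate server that spreads requests over a cluster."""

    def __init__(self, servers):
        # map server name -> number of requests assigned so far
        self.load = {s: 0 for s in servers}

    def route(self):
        # pick the server with the smallest current load
        target = min(self.load, key=self.load.get)
        self.load[target] += 1
        return target

lb = Balancer(["s1", "s2", "s3"])
picks = [lb.route() for _ in range(6)]
# after 6 requests the load is spread evenly: 2 requests per server
```

After six requests each server has handled exactly two, which is the "pressure tends to balance" behaviour the paragraph describes.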
Load balancing is implemented on the reverse-proxy principle.
Several common ways of load balancing
1. Round robin (the default)
Requests are distributed to the backend servers one by one in order. If a backend server goes down, it is automatically removed from the rotation.
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
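Round robin is easy to sketch outside of nginx. The following Python fragment (illustrative only, reusing the two IPs from the config above) cycles through the server list in order:

```python
from itertools import cycle

# the two backends from the upstream block above
servers = ["192.168.0.14", "192.168.0.15"]
rr = cycle(servers)  # endless round-robin iterator

picks = [next(rr) for _ in range(4)]
# alternates: .14, .15, .14, .15
```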
2. weight
Specifies the polling probability; weight is proportional to the share of requests a server receives. Useful when the backend servers have uneven performance.
upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}
The higher the weight, the more traffic the server receives. In the example above the split is 30% and 70% respectively.
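The 30%/70% split can be verified with a small sketch. This is a simplified model (nginx actually uses smooth weighted round robin, but the long-run proportions are the same): expand each server into the rotation as many times as its weight, then count where 100 requests land.

```python
# weights from the example above: 3 and 7
servers = {"192.168.0.14": 3, "192.168.0.15": 7}

# naive weighted rotation: repeat each server `weight` times
rotation = [s for s, w in servers.items() for _ in range(w)]

counts = {s: 0 for s in servers}
for i in range(100):
    counts[rotation[i % len(rotation)]] += 1
# counts -> {'192.168.0.14': 30, '192.168.0.15': 70}
```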
3. ip_hash
The methods above share a problem: in a load-balanced system, a user who logs in on one server may have their next request routed to a different server in the cluster, losing the login session. This is clearly unacceptable.
The ip_hash directive solves this. Once a client has been served by a particular server, subsequent requests from that client are routed to the same server by the hash algorithm.
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server, which solves the session problem.
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
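The stickiness property is simple to demonstrate. The sketch below is an assumption-laden stand-in (it hashes the full IP with MD5 purely for illustration; nginx's real ip_hash hashes only the first three octets of an IPv4 address), but it shows why the same client always lands on the same backend:

```python
import hashlib

# backends from the upstream block above
servers = ["192.168.0.14:88", "192.168.0.15:80"]

def pick(client_ip):
    # deterministic hash of the client IP -> stable server index
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

a = pick("10.0.0.7")  # "10.0.0.7" is a made-up client IP
b = pick("10.0.0.7")
# same client IP -> same backend on every request, so the session survives
```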
4. fair (third-party)
Requests are assigned according to the response time of the backend servers; servers with shorter response times are preferred.
upstream backserver {
    server server1;
    server server2;
    fair;
}
5. url_hash (third-party)
Requests are assigned according to a hash of the requested URL, so each URL is always directed to the same backend server. This is most effective when the backend servers are caches.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
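A minimal sketch of the same idea, using CRC32 as in the `hash_method crc32` line above (the URI is a made-up example; this models the mapping, not nginx's exact implementation):

```python
import zlib

# the two squid caches from the upstream block above
servers = ["squid1:3128", "squid2:3128"]

def pick(request_uri):
    # crc32 of the URI -> stable cache index
    return servers[zlib.crc32(request_uri.encode()) % len(servers)]

first = pick("/static/logo.png")
second = pick("/static/logo.png")
# the same URI always lands on the same cache, so its cached copy is reused
```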
Each server entry can be given a status:
down: the server temporarily does not participate in the load balancing.
weight: defaults to 1; the larger the weight, the larger the share of the load.
max_fails: the number of failed requests allowed, defaulting to 1. When it is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: the time the server is paused after max_fails failures.
backup: receives requests only when all non-backup machines are down or busy, so this machine carries the least load.
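The backup semantics described above can be sketched as follows (server names and the up/down flags are illustrative; real health state comes from max_fails/fail_timeout tracking, which is omitted here):

```python
# a: primary, b: primary, c: backup
servers = [
    {"name": "a", "backup": False, "up": True},
    {"name": "b", "backup": False, "up": True},
    {"name": "c", "backup": True,  "up": True},
]

def available(pool):
    """Backups serve traffic only when no non-backup server is up."""
    primaries = [s for s in pool if not s["backup"] and s["up"]]
    return primaries or [s for s in pool if s["backup"] and s["up"]]

# normally only the primaries a and b serve traffic
normal = [s["name"] for s in available(servers)]

# if both primaries go down, the backup takes over
for s in servers[:2]:
    s["up"] = False
failover = [s["name"] for s in available(servers)]
```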
Configuration example:
# user nobody;
worker_processes 4;

events {
    # maximum number of concurrent connections
    worker_connections 1024;
}

http {
    # list of upstream servers
    upstream myproject {
        # the ip_hash directive brings the same user to the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;
        server 172.31.2.183;
    }

    server {
        # listening port
        listen 80;

        # root location
        location / {
            # proxy to the server list above
            proxy_pass http://myproject;
        }
    }
}
Summary
The above is the whole content of this article. I hope it offers some reference value for your study or work. Thank you for your support.
© 2024 shulou.com SLNews company. All rights reserved.