SLTechnology News&Howtos > Servers | Shulou (Shulou.com) | 2025-04-12 Update
Main points of content:
1. Common Web cluster schedulers
2. HAProxy application analysis
3. HAProxy scheduling algorithms
4. Experimental example
   Step 1: set up two Nginx servers
   Step 2: build HAProxy
   Step 3: access the scheduling server (192.168.100.100/test.html) from a browser on Windows 7
   Step 4: optimize the logs
1. Common Web cluster schedulers
Common Web cluster schedulers fall into software and hardware categories. Software schedulers are usually open source: LVS, HAProxy, or Nginx. Hardware schedulers are typically F5 appliances, and products from other vendors such as Barracuda and NSFOCUS are also widely used.
2. HAProxy application analysis
(1) LVS stands up well to heavy load in enterprise applications, but it has shortcomings:
- LVS does not support regular-expression processing, so it cannot separate static from dynamic content.
- For large websites, LVS is complex to implement and configure, and its maintenance cost is relatively high.
(2) HAProxy is software that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications:
- It is especially suitable for heavily loaded Web sites.
- Running on current hardware, it can support tens of thousands of concurrent connections.
3. HAProxy scheduling algorithms
HAProxy supports many scheduling algorithms; the three most commonly used are described below.
(1) RR (round robin)
RR is the simplest and most commonly used algorithm: requests are distributed across the nodes in strict rotation.
Example:
With three nodes A, B, and C, the first request is assigned to node A, the second to node B, and the third to node C. The fourth request wraps back to node A, and so on, spreading requests evenly to achieve load balancing.
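The rotation above can be sketched in a few lines of Python (the node names are illustrative and not part of HAProxy itself):

```python
from itertools import cycle

# Round-robin (RR) sketch: requests are handed to the backend nodes
# A, B, C in strict rotation, wrapping around after the last node.
nodes = ["A", "B", "C"]
rr = cycle(nodes)

# Four consecutive requests: the fourth wraps back to node A.
assignments = [next(rr) for _ in range(4)]
print(assignments)  # ['A', 'B', 'C', 'A']
```

Note that plain RR ignores how busy each node is; it only guarantees an equal share of requests, not an equal load.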
(2) LC (least connections)
LC dynamically assigns each front-end request to the back-end node that currently has the fewest connections.
Example:
There are three nodes A, B, and C with 4, 5, and 6 connections respectively. The first new request is assigned to A, making the counts 5, 5, and 6. The second request is again assigned to A, making the counts 6, 5, and 6, so the next request goes to B; every new request goes to whichever node has the fewest connections at that moment. In practice, connections on A, B, and C are released dynamically, so the counts rarely stay equal, which makes this algorithm a significant improvement over RR.
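A minimal Python sketch of this selection rule, seeded with the connection counts from the example above (ties are broken by node order, which is an assumption of this sketch, not HAProxy's documented behavior):

```python
# Least-connections (LC) sketch: each new request goes to the node
# that currently has the fewest active connections.
connections = {"A": 4, "B": 5, "C": 6}

def assign(conns):
    node = min(conns, key=conns.get)  # node with the fewest connections
    conns[node] += 1                  # the new request occupies a connection
    return node

first = assign(connections)   # A (4 < 5 < 6) -> counts become A:5 B:5 C:6
second = assign(connections)  # A again (tie with B, A comes first) -> A:6 B:5 C:6
third = assign(connections)   # B now has the fewest (5) -> A:6 B:6 C:6
print(first, second, third)   # A A B
```

In a real scheduler the counts also decrease as clients disconnect, which is what keeps the distribution adaptive.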
(3) SH (source hashing)
SH schedules based on the source of the request and suits scenarios where session state is recorded on the server; scheduling can be keyed on the source IP, a cookie, and so on.
Example:
With three nodes A, B, and C, the first user's first visit is assigned to A and the second user's first visit to B. On their second visits, the first user is again assigned to A and the second user again to B. As long as the load-balancing scheduler is not restarted, the first user's requests always go to A and the second user's always go to B, achieving cluster scheduling with session persistence.
The advantage of this algorithm is session persistence; the drawback is that when some source IPs generate very heavy traffic, the load becomes unbalanced, and the overloaded nodes can affect the service.
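The stickiness can be sketched by hashing the client's source IP onto the node list (MD5 and the modulo step are illustrative assumptions; HAProxy's actual source-hash implementation differs in detail):

```python
import hashlib

# Source-hashing (SH) sketch: the client's source IP is hashed to pick
# a node, so the same client always lands on the same backend as long
# as the node set does not change (session persistence).
nodes = ["A", "B", "C"]

def schedule(src_ip):
    digest = hashlib.md5(src_ip.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

user1 = schedule("192.168.100.50")
# Repeat visits from the same IP always hit the same node.
assert schedule("192.168.100.50") == user1
```

The sketch also shows the weakness: if one IP sends most of the traffic, its hashed node absorbs all of it, while RR or LC would spread that load.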
4. Experimental example
(1) Environment:
- Two Nginx servers, one HAProxy scheduling server, and one Windows 7 machine for testing
- A test page is written on each of the two Nginx servers
- All hosts' network adapters are set to host-only mode, with the adapter settings adjusted accordingly
- The client reaches both node servers through the scheduling server; it never needs the real server addresses

Role                      IP address
HAProxy                   192.168.100.100
Nginx 01                  192.168.100.201
Nginx 02                  192.168.100.202
Windows 7 (for testing)   192.168.100.50
Step 1: set up two Nginx servers
For the Nginx build itself, see my earlier blog posts, which cover the process and details; it is not repeated here.
After building the Nginx service, add a test page under the Nginx html directory to verify the results later:
On server 1:
echo "this is kgc web" > /usr/local/nginx/html/test.html
On server 2:
echo "this is accp web" > /usr/local/nginx/html/test.html
Step 2: build HAProxy
(1) Install the build tools:
yum install -y bzip2-devel pcre-devel gcc gcc-c++ make
(2) Copy the HAProxy source package to the server (for example, via a remote mount), then unpack it:
tar zxvf haproxy-1.5.19.tar.gz -C /opt    # extract to the /opt directory
(3) Compile and install:
make TARGET=linux3100    # choose the TARGET matching your kernel version (check with uname -a)
make install
(4) Create the configuration directory, copy the sample configuration, and edit it:
mkdir /etc/haproxy
cp examples/haproxy.cfg /etc/haproxy/
vim /etc/haproxy/haproxy.cfg

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        #log loghost    local0 info
        maxconn 4096
        uid 99
        gid 99
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webcluster 0.0.0.0:80
        option httpchk GET /test.html
        balance roundrobin
        server inst1 192.168.100.201:80 check inter 2000 fall 3
        server inst2 192.168.100.202:80 check inter 2000 fall 3
(5) Install the init script:
cp examples/haproxy.init /etc/init.d/haproxy      # copy the init script into /etc/init.d
chmod +x /etc/init.d/haproxy                      # make it executable
chkconfig --add haproxy                           # register it as a service
ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy   # create a symbolic link so the binary is on the PATH
(6) Start the service, and turn off the firewall and SELinux:
service haproxy start
[root@Haproxy haproxy-1.5.19]# systemctl stop firewalld.service
[root@Haproxy haproxy-1.5.19]# setenforce 0
Step 3: access the scheduling server from a browser on the Windows 7 client
Browse to http://192.168.100.100/test.html. Refreshing the page alternates between the two Nginx test pages, confirming that round-robin scheduling is working.
Step 4: optimize the logs
By default, HAProxy sends its log to the system syslog; in production it is usually defined separately. Modifying the scheduler's configuration file lets normal access messages and warning messages be stored in different log files, which simplifies management.
(1) Modify the main configuration file:
vim /etc/haproxy/haproxy.cfg

In the global section, change the log lines to:
log /dev/log local0 info      # destination for normal access messages
log /dev/log local0 notice    # destination for prompts, warnings, and similar messages
(2) Restart the service:
service haproxy restart
(3) Create the rsyslog rule file and add the filtering rules:
[root@Haproxy haproxy]# touch /etc/rsyslog.d/haproxy.conf
[root@Haproxy haproxy]# cd /etc/rsyslog.d/
[root@Haproxy rsyslog.d]# vim haproxy.conf

if ($programname == 'haproxy' and $syslogseverity-text == 'info') then -/var/log/haproxy/haproxy-info.log
&~
if ($programname == 'haproxy' and $syslogseverity-text == 'notice') then -/var/log/haproxy/haproxy-notice.log
&~
(4) Restart the log service:
systemctl restart rsyslog.service
(5) View the log file:
cat /var/log/haproxy/haproxy-info.log