What are the common load balancing software?

Load balancing (LB) software

Common load balancing software: LVS, Nginx, Haproxy

LVS:

1). Works at layer 4 of the network stack and generates almost no traffic itself, which is why it has the strongest load capacity of the three and the lowest memory and CPU consumption.

2). Wide range of applications: it can load balance not only web services but also other applications, for example LVS+MySQL load balancing.

3). Configuration is simple, because there is little that can be configured.

4). No return traffic passes through it: LVS only distributes requests, and response traffic does not flow back out through the balancer, which guarantees that the balancer's I/O performance is not affected by heavy traffic.

5). It has the concept of a virtual IP (VIP).

Nginx:

1). Works at layer 7, so it can apply traffic-routing policies to HTTP applications, such as routing by domain name.

2). High load capacity and stability: it supports tens of thousands of concurrent connections, although its load capacity is lower than that of LVS.

3). Easy to install and configure, and supports more regular-expression rules than Haproxy. Its dependence on network stability is also very small.

4). It can detect internal server faults through the port: based on the status code or timeout returned for a page a server processed, it can resubmit the failed request to another node.

5). It can act as a web server in its own right.

6). It supports reverse proxying and load balancing, as sketched below.
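As a minimal sketch of points 1) and 6), an Nginx reverse-proxy load balancing configuration might look like the following (the upstream name, domain name, and back-end addresses are illustrative, not taken from this article's setup):

upstream web_backend {                  # hypothetical pool of back-end nodes
    server 192.168.0.11:80 weight=1;    # back-end node 1
    server 192.168.0.12:80 weight=1;    # back-end node 2
}

server {
    listen 80;
    server_name www.example.com;        # layer-7 routing by domain name, as in point 1)
    location / {
        proxy_pass http://web_backend;  # forward requests to the upstream pool
    }
}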

Haproxy:

1). Supports virtual hosts and can work at both layer 4 and layer 7.

2). Haproxy outperforms Nginx in both load balancing efficiency and concurrent processing.

3). It makes up for some of Nginx's shortcomings, such as support for session persistence and cookie-based guidance. It can also check the status of back-end servers by fetching a specified URL.

4). It offers many load balancing strategies, such as roundrobin (simple round robin), leastconn (fewest server connections), static-rr (weighted round robin), uri (URI hash), source (source-IP hash), url_param (hash on a URL parameter of the request), and so on. A minimal configuration sketch follows.
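For illustration, a minimal haproxy.cfg fragment using the roundrobin strategy, URL health checks, and cookie-based persistence mentioned above (the section names and addresses are hypothetical):

frontend web_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin                     # or leastconn, static-rr, uri, source, url_param ...
    option httpchk GET /health             # detect back-end status by fetching a specified URL
    cookie SRV insert indirect nocache     # cookie-based session persistence
    server web1 192.168.0.11:80 check cookie w1
    server web2 192.168.0.12:80 check cookie w2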

LVS+Keepalived

LVS is open source software that implements load balancing on the Linux platform. LVS stands for Linux Virtual Server. Since version 2.4 of the Linux kernel, LVS has been part of the standard Linux kernel.

The main job of LVS is to provide the scheduling algorithm and dispatch client requests to the Real Servers as required. The main job of Keepalived is to provide redundancy for the LVS director and to check the health of the Real Servers: when a Real Server fails, it is removed from the LVS cluster. The Real Servers themselves are only responsible for providing the service. For example, in the LVS+Keepalived+Nginx pattern, Nginx acts as the Real Server.

LVS forwarding modes

VS/NAT (Virtual Server via Network Address Translation): through network address translation, the scheduler rewrites the destination address of the request packet and forwards the request to a back-end Real Server. When the Real Server's response packet passes back through the scheduler, its source address is rewritten before it is returned to the client, completing the load scheduling process. The response returns along the same path the client's request arrived on.

VS/TUN (Virtual Server via IP Tunneling): with the NAT technique above, both requests and responses must pass through the scheduler for address rewriting, so as client requests grow the scheduler's processing capacity becomes a bottleneck. To solve this, VS/TUN has the scheduler forward the request packet to the Real Server through an IP tunnel, while the Real Server returns the response directly to the client, so the scheduler only has to process request packets. Since the response of a typical network service is much larger than the request, adopting VS/TUN can increase the maximum throughput of the cluster system by a factor of 10.

VS/DR (Virtual Server via Direct Routing): VS/DR sends the request to the Real Server by rewriting the MAC address of the request packet, and the back-end Real Server returns the response directly to the client. Like VS/TUN, VS/DR can greatly improve the scalability of a cluster system. The following example uses this VS/DR pattern.

LVS (VS/DR) + Keepalived high-availability cluster example

OS environment: CentOS7

1). Install the ipvsadm and keepalived packages on servers 20 and 21:

yum -y install ipvsadm
yum -y install keepalived

Start the keepalived service

systemctl start keepalived.service

When configuring LVS, you cannot configure ipvs in the kernel directly; you manage it through ipvsadm, the user-space management tool for ipvs, which is also used to view LVS forwarding rules and proxies, as sketched below.
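For comparison, creating this article's virtual server by hand with ipvsadm would look roughly like this (keepalived generates equivalent rules from the configuration below, so these commands are only for illustration):

ipvsadm -A -t 10.20.1.99:443 -s rr                      # add a virtual service with round-robin scheduling
ipvsadm -a -t 10.20.1.99:443 -r 10.20.1.22:443 -g -w 1  # add Real Server 22 in DR (-g, gatewaying) mode, weight 1
ipvsadm -a -t 10.20.1.99:443 -r 10.20.1.23:443 -g -w 1  # add Real Server 23 in DR mode, weight 1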

2). Modify the keepalived configuration file

The master configuration file on server 20 is as follows:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        admin@localhost
    }
    notification_email_from root@localhost
    router_id LVS_01                 # an identity of the running keepalived server; shown in the mail subject
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER                     # the role of this keepalived node: MASTER
    interface ens192                 # network interface; check with "ip a"
    virtual_router_id 100            # virtual router ID; must be identical on the active and standby nodes, marking them as one VRRP group
    priority 100                     # priority; the master's must be higher than the slave's
    advert_int 1                     # interval, in seconds, of synchronization checks between the MASTER and BACKUP load balancers
    authentication {                 # authentication type and password
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.20.1.99/24 dev ens192     # set the VIP
    }
}

virtual_server 10.20.1.99 443 {      # virtual server; specify the virtual IP address and service port
    delay_loop 3                     # health check interval
    lb_algo rr                       # load balancer scheduling algorithm: wlc | rr
    lb_kind DR                       # LVS load balancing mode; there are three: NAT, TUN and DR
    persistence_timeout 50           # session persistence time in seconds; very useful for dynamic pages
    protocol TCP                     # forwarding protocol type: TCP or UDP

    real_server 10.20.1.22 443 {     # a Real Server (here an Nginx server); the weight is set here
        weight 1
        TCP_CHECK {                  # check method; HTTP_GET | SSL_GET can also be set
            connect_port 443         # port to probe on the Real Server
            connect_timeout 5        # timeout in seconds; no response within it counts as one failed check
            retry 3                  # number of failed checks before the Real Server is considered dead
            delay_before_retry 3     # retry interval
        }
    }

    real_server 10.20.1.23 443 {
        weight 1
        TCP_CHECK {
            connect_port 443
            connect_timeout 5
            retry 3
            delay_before_retry 3
        }
    }
}

For /etc/keepalived/keepalived.conf on slave 21, change state to BACKUP and set priority to a value lower than the master's, as in the fragment below.
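For example, the vrrp_instance block on 21 might differ from the master's only in these two directives (90 is an arbitrary value below the master's 100):

vrrp_instance VI_1 {
    state BACKUP            # BACKUP role instead of MASTER
    interface ens192
    virtual_router_id 100   # unchanged: must match the master
    priority 90             # lower than the master's priority of 100
    ...
}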

On the master, check the IP and VIP:
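ip addr show ens192    # the VIP 10.20.1.99/24 should appear on the master's interface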

Use the ipvsadm command to view the forwarding rules:
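ipvsadm -Ln           # list the virtual server table with numeric addresses; both Real Servers should appear
ipvsadm -Ln --stats   # the same table with connection and packet counters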

3). Real Server configuration

Because VS/DR mode is used, a script is configured on the back-end Real Servers (the Nginx servers). Both Nginx servers must bind the VIP to the loopback alias lo:0 and suppress ARP responses for it.

vim /opt/scripts/lvs_rs.sh

#!/bin/bash
vip=10.20.1.99
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &> /dev/null

Execute it on servers 22 and 23 respectively.
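For example (using the script path defined above):

bash /opt/scripts/lvs_rs.sh
ip addr show lo        # lo:0 should now carry the VIP 10.20.1.99/32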

4). Test the LVS+Keepalived high-availability cluster

From a client browser, access the web service through the cluster VIP (10.20.1.99). If the service remains reachable as the VIP drifts between the LVS+Keepalived nodes, the cluster has been built successfully.
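For example, from a client machine (the -k flag assumes the back ends use a self-signed certificate on port 443):

curl -k https://10.20.1.99/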

First stop keepalived.service on master 20 and check the IP situation:
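For example:

systemctl stop keepalived.service    # on master 20
ip addr show ens192                  # the VIP 10.20.1.99 should no longer be listed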

The VIP is gone from the master.

Then check on slave 21 (with the same ip addr command) to see whether the VIP has drifted over:

The VIP has moved over to the slave.

If keepalived.service is now restarted on master 20, the VIP drifts back to the master, since the master has the higher priority.
