2025-02-23 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report --
In this article, the editor walks through the principle, installation, and configuration of load balancing with LVS. The content is detailed and analyzed from a professional point of view; I hope you find it useful.
An LB cluster (short for load balance cluster) distributes load across multiple servers. Commonly used open-source load balancers are Nginx, LVS, and HAProxy; commercial hardware load balancers include F5 and Citrix NetScaler. The focus here is on LVS, with a detailed summary and record.
I. Basic introduction to load balancing with LVS
The architecture and principle of an LB cluster are simple: a user's request first reaches the Director Server, which then distributes it evenly to a real back-end server (Real Server) according to the configured scheduling algorithm. To avoid users seeing different data on different machines, shared storage is needed so that all servers return the same data.
LVS is short for Linux Virtual Server. It is an open-source project initiated by Dr. Wensong Zhang; its official website is http://www.linuxvirtualserver.org. LVS is now part of the standard Linux kernel. With LVS load balancing on the Linux operating system, you can build a high-performance, highly available Linux server cluster with good reliability, scalability, and manageability, achieving optimal performance at low cost. Logically, the LVS architecture is divided into a scheduling layer, a server cluster layer, and shared storage.
II. Basic working principle of LVS
Process description:
1. A user sends a request to the load-balancing scheduler (Director Server), and the request enters kernel space.
2. The PREROUTING chain receives the request first, determines that the destination IP is a local IP, and passes the packet to the INPUT chain.
3. IPVS works on the INPUT chain. When the request reaches INPUT, IPVS compares it against the defined cluster services. If the request matches a cluster service, IPVS rewrites the destination IP address and port in the packet and sends the new packet to the POSTROUTING chain.
4. The POSTROUTING chain finds that the destination IP address is one of its back-end servers, and the packet is finally routed to that back-end server.
III. The composition of LVS
LVS consists of two components: ipvs and ipvsadm.
1. ipvs (IP Virtual Server): code that works in kernel space and actually performs the scheduling.
2. ipvsadm: a tool that works in user space and writes rules for the ipvs kernel framework, defining what the cluster service is and which machines are the real back-end servers (Real Servers).
IV. Terms related to LVS
1. DS: Director Server. The front-end load balancer node.
2. RS: Real Server. A real working server at the back end.
3. VIP: Virtual IP, the externally facing IP address that users' requests target.
4. DIP: Director Server IP, used mainly to communicate with the internal hosts.
5. RIP: Real Server IP, the IP address of a back-end server.
6. CIP: Client IP, the IP address of the client.
The following is a summary of the principles and characteristics of the three working modes.
V. Principle and characteristics of LVS/NAT
1. The key is to understand how NAT works and how the packets are rewritten.
Process description:
(a) When the user request arrives at the Director Server, the datagram first goes to the PREROUTING chain in kernel space. At this point the source IP is CIP and the destination IP is VIP.
(b) PREROUTING finds that the destination IP of the packet is local, and sends the packet to the INPUT chain.
(c) IPVS checks whether the service requested by the packet is a cluster service. If so, it rewrites the destination IP address of the packet to the back-end server IP and sends the packet to the POSTROUTING chain. At this point the source IP is CIP and the destination IP is RIP.
(d) The POSTROUTING chain routes the packet to the Real Server.
(e) The Real Server finds that the destination is its own IP, builds a response, and sends it back to the Director Server. At this point the source IP is RIP and the destination IP is CIP.
(f) Before responding to the client, the Director Server rewrites the source IP address to its own VIP. At this point the source IP is VIP and the destination IP is CIP.
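The address rewriting in steps (a)-(f) can be sketched with a tiny shell model. This is a toy illustration only; the CIP/VIP/RIP values are made-up examples.

```shell
#!/bin/sh
# Toy model of LVS-NAT address rewriting; all addresses are example values.
CIP=10.0.0.5            # client
VIP=172.16.254.200      # director's public address
RIP=192.168.0.18        # chosen real server

request_in="src=$CIP dst=$VIP"    # (a) client -> director
request_out="src=$CIP dst=$RIP"   # (c) director rewrites the destination IP
reply_in="src=$RIP dst=$CIP"      # (e) real server answers
reply_out="src=$VIP dst=$CIP"     # (f) director rewrites the source IP
printf '%s\n' "$request_in" "$request_out" "$reply_in" "$reply_out"
```

Note that both directions pass through the director, which is exactly why it becomes the bottleneck described below.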
2. Characteristics of LVS-NAT model
(1) RS should use a private address, and the gateway of RS must point to DIP
(2) DIP and RIP must be in the same network segment
(3) both request and response messages need to go through Director Server. In high load scenarios, Director Server can easily become a performance bottleneck.
(4) Port mapping is supported
(5) RS can use any operating system
Defect: both requests and responses must pass through the Director Server, so it comes under heavy pressure.
VI. Principle and characteristics of LVS/DR
1. DR mode works by rewriting the destination MAC address of the request message to the MAC address of the selected RS.
Process description:
(a) When the user request arrives at the Director Server, the datagram first goes to the PREROUTING chain in kernel space. At this point the source IP is CIP and the destination IP is VIP.
(b) PREROUTING finds that the destination IP of the packet is local, and sends the packet to the INPUT chain.
(c) IPVS checks whether the service requested by the packet is a cluster service. If so, it rewrites the source MAC address of the request to the MAC address of the DIP interface and the destination MAC address to the MAC address of the RIP interface, then sends the packet to the POSTROUTING chain. The source and destination IP addresses are not modified; only the MAC addresses change.
(d) Since DS and RS are in the same network, the packet is transmitted at layer 2. The POSTROUTING chain checks that the destination MAC address is the MAC address of RIP, and the packet is sent to the Real Server.
(e) The RS finds that the destination MAC address of the request is its own MAC address and accepts the message. After processing, it passes the response through the lo interface to the eth0 network card and sends it out directly. At this point the source IP is VIP and the destination IP is CIP.
(f) The response message is finally delivered to the client.
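The MAC-only rewriting in step (c) can be sketched with a toy shell model. All MAC and IP values here are made-up examples, not from the lab environments later in this article.

```shell
#!/bin/sh
# Toy model of LVS-DR forwarding: the IP header is untouched, only the
# layer-2 MAC addresses change. All values are example placeholders.
CIP=10.0.0.5; VIP=192.168.0.38
MAC_DIP="aa:aa:aa:aa:aa:01"       # director's NIC
MAC_RIP="aa:aa:aa:aa:aa:02"       # chosen real server's NIC
MAC_GW="aa:aa:aa:aa:aa:99"        # front-end router

in_frame="smac=$MAC_GW dmac=$MAC_DIP ip(src=$CIP dst=$VIP)"
# (c) the director rewrites only the MAC addresses; src/dst IP stay the same:
out_frame="smac=$MAC_DIP dmac=$MAC_RIP ip(src=$CIP dst=$VIP)"
echo "$in_frame"
echo "$out_frame"
```

Because the destination IP is still the VIP when the frame reaches the RS, the RS must also hold the VIP (on lo:0), which is characteristic (8) below.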
2. Characteristics of LVS-DR model
(1) Ensure that the front-end router sends all packets whose destination address is the VIP to the Director Server, not to an RS.
(2) RS can use a private address or a public address. If a public address is used, the RIP can be accessed directly over the Internet.
(3) RS and Director Server must be in the same physical network.
(4) All request messages go through the Director Server, but response messages must not go through it.
(5) Address translation and port mapping are not supported.
(6) RS can run most common operating systems.
(7) The gateway of RS must not point to DIP (because responses are not allowed to pass through the director).
(8) The lo interface on RS is configured with the VIP address.
Defect: RS and DS must be in the same machine room.
3. Solutions for feature (1):
Bind a static route on the front-end router so that packets for the VIP are routed only to the Director Server. Problem: the user may not be able to manage the router, since it may be provided by the carrier, so this method may not be practical.
arptables: apply firewall rules at the ARP level to filter the RSes' responses to ARP requests. This is provided by the arptables tool, analogous to iptables.
Modify kernel parameters (arp_ignore and arp_announce) on the RS: configure the VIP on an alias of the lo interface and restrict the RS from answering ARP resolution requests for the VIP.
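For reference, the same kernel parameters can also be set via sysctl. This is a sketch only, assuming the VIP sits on an lo alias as in the lab scripts later in this article (requires root):

```shell
# Sketch: sysctl form of the arp_ignore/arp_announce fix, run on each RS.
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

arp_ignore=1 makes the RS answer ARP requests only for addresses configured on the receiving interface; arp_announce=2 makes it use the best local address (not the VIP) as the source in ARP announcements.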
VII. Principle and characteristics of LVS/Tun
Tun mode encapsulates another IP header outside the original IP message: the inner IP header has source CIP and destination VIP, and the outer IP header has source DIP and destination RIP.
Process description:
(a) When the user request arrives at the Director Server, the datagram first goes to the PREROUTING chain in kernel space. At this point the source IP is CIP and the destination IP is VIP.
(b) PREROUTING finds that the destination IP of the packet is local, and sends the packet to the INPUT chain.
(c) IPVS checks whether the requested service is a cluster service. If so, it encapsulates an additional IP header on the request, with source IP DIP and destination IP RIP, and sends the packet to the POSTROUTING chain.
(d) The POSTROUTING chain sends the packet to the RS based on the newly encapsulated outer IP header (because of the extra outer IP header, this can be understood as transmission through a tunnel). The outer source IP is DIP and the destination IP is RIP.
(e) The RS receives the packet, finds that the outer destination is its own IP, and strips the outermost IP header. It then finds another IP header inside whose destination is the VIP on its own lo interface, and processes the request. When done, it sends the response through the lo interface to the eth0 network card and out. At this point the source IP is VIP and the destination IP is CIP.
(f) The response message is finally delivered to the client.
1. Characteristics of LVS-Tun model
(1) RIP, VIP and DIP are all public network addresses
(2) the gateway of RS will not and cannot point to DIP.
(3) all request messages go through Director Server, but response messages must not go through Director Server.
(4) Port mapping is not supported
(5) the RS system must support tunneling.
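The encapsulation in steps (c) and (e) can be sketched with a toy shell model of the nested headers (all addresses are made-up example values):

```shell
#!/bin/sh
# Toy model of the IP-in-IP encapsulation used by LVS-Tun; all addresses
# are example placeholders.
CIP=10.0.0.5; VIP=192.168.0.38; DIP=192.168.0.8; RIP=192.168.0.18

inner="src=$CIP dst=$VIP"                          # original packet
packet="outer(src=$DIP dst=$RIP) inner($inner)"    # (c) director encapsulates
# (e) the RS strips the outer header and recovers the original packet:
unwrapped=$(printf '%s\n' "$packet" | sed 's/.*inner(\(.*\))$/\1/')
echo "$packet"
echo "$unwrapped"
```

Since only the outer header is addressed to the RS, director and RS no longer need to share a layer-2 network, which is Tun mode's advantage over DR.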
In practice, DR is the most commonly used mode in enterprises, while NAT is relatively simple and convenient to configure. The specific configuration of the DR and NAT modes is summarized in the practice sections below.
VIII. The eight scheduling algorithms of LVS
1. Round-robin scheduling rr
This algorithm is the simplest, which is to schedule requests to different servers in turn. The most important feature of this algorithm is simplicity. The polling algorithm assumes that all servers have the same ability to handle requests, and the scheduler distributes all requests equally to each real server, regardless of back-end RS configuration and processing power.
2. Weighted round-robin (wrr)
This algorithm adds the concept of weight to rr: each RS can be assigned a weight, and the higher the weight, the more requests it receives. It is mainly an optimization and supplement to rr: LVS considers each server's performance and assigns it a weight accordingly. If server A has weight 1 and server B has weight 2, B receives twice as many requests as A.
3. Least connections (lc)
This algorithm decides where to send a request based on the number of connections on the back-end RSes. For example, if RS1 has fewer connections than RS2, the request is sent to RS1 first.
4. Weighted least connections (wlc)
This algorithm has one more concept of weight than lc.
5. Locality-based least connections (lblc)
This algorithm schedules based on the destination IP address of the request packet. It first finds the server most recently used for that destination IP; if that server is available and able to handle the request, the scheduler tries to keep choosing the same server; otherwise it picks another feasible server.
6. Locality-based least connections with replication (lblcr)
Instead of recording a mapping from a destination IP to a single server, it maintains a mapping from a destination IP to a set of servers, preventing a single server from being overloaded.
7. Destination hashing (dh)
Based on the destination IP address, this algorithm builds a mapping from destination IPs to servers through a hash function. When the mapped server is unavailable or overloaded, requests for that destination IP are sent to another server.
8. Source hashing (sh)
Similar to destination hashing, but it statically assigns a fixed server according to a hash of the source address.
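As a rough illustration of wrr, consider rs1 at weight 1 and rs2 at weight 2: one scheduling cycle dispatches two requests to rs2 for every one to rs1. This is a toy sketch, not the kernel's actual gcd-based implementation:

```shell
#!/bin/sh
# Toy weighted round-robin: rs1 weight 1, rs2 weight 2.
# One cycle for these weights: rs2 rs1 rs2 (rs2 appears twice as often).
schedule="rs2 rs1 rs2"
out=""
i=0
while [ "$i" -lt 6 ]; do          # dispatch 6 requests (two full cycles)
    for rs in $schedule; do
        out="$out $rs"
        i=$((i+1))
    done
done
echo "dispatched:$out"
```

Over the 6 simulated requests, rs2 is chosen 4 times and rs1 is chosen 2 times, matching the 2:1 weight ratio.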
IX. Practice: the NAT model of LVS
1. Experimental environment
Three servers: one as the director and two as real servers. The director has one external network card (172.16.254.200) and one intranet IP (192.168.0.8). The two real servers have only intranet IPs (192.168.0.18 and 192.168.0.28), and their gateways must be set to the director's intranet IP (192.168.0.8).
2. Installation and configuration
The nginx service is installed on both real server
# yum install -y nginx
Install ipvsadm on Director
# yum install -y ipvsadm
Edit the nat implementation script on Director
# vim /usr/local/sbin/lvs_nat.sh
Write the following content:
#!/bin/bash
# Enable route forwarding on the director server:
echo 1 > /proc/sys/net/ipv4/ip_forward
# Turn off ICMP redirects:
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth2/send_redirects
# Set up the NAT firewall on the director:
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
# Configure ipvsadm on the director:
IPVSADM='/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 172.16.254.200:80 -s wrr
$IPVSADM -a -t 172.16.254.200:80 -r 192.168.0.18:80 -m -w 1
$IPVSADM -a -t 172.16.254.200:80 -r 192.168.0.28:80 -m -w 1
After saving, run this script directly on Director to complete the configuration of lvs/nat.
/bin/bash /usr/local/sbin/lvs_nat.sh
View the rules set by ipvsadm
ipvsadm -ln
3. Test the effect of LVS
Test http://172.16.254.200 from a browser. To tell the two back ends apart, modify the default nginx page on each:
Execute on RS1
# echo "rs1rs1" > /usr/share/nginx/html/index.html
Execute on RS2
# echo "rs2rs2" > /usr/share/nginx/html/index.html
Note: be sure to set the gateway on both RSes to the director's intranet IP.
X. Practice: the DR model of LVS
1. Experimental environment
Three machines:
Director node: (eth0 192.168.0.8 vip eth0:0 192.168.0.38)
Real server1: (eth0 192.168.0.18 vip lo:0 192.168.0.38)
Real server2: (eth0 192.168.0.28 vip lo:0 192.168.0.38)
2. Installation
The nginx service is installed on both real server
# yum install -y nginx
Install ipvsadm on Director
# yum install -y ipvsadm
3. Configure script on Director
# vim /usr/local/sbin/lvs_dr.sh
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.0.38
rs1=192.168.0.18
rs2=192.168.0.28
ifconfig eth0:0 down
ifconfig eth0:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth0:0
$ipv -C
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 3
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
Execute the script:
# bash /usr/local/sbin/lvs_dr.sh
4. Configure the script on 2 rs:
# vim /usr/local/sbin/lvs_dr_rs.sh
#!/bin/bash
vip=192.168.0.38
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
Execute the script on rs:
bash /usr/local/sbin/lvs_dr_rs.sh
5. Experimental test
The test method is the same as above: access http://192.168.0.38 in a browser.
Note: in DR mode, the gateway of the 2 rs nodes does not need to be set to the IP of the dir node.
Reference link address:
http://www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
XI. LVS with keepalived
LVS can achieve load balancing but cannot perform health checks: if an RS fails, LVS will still forward requests to the failed server, resulting in failed requests. Keepalived performs health checks, and at the same time provides high availability for LVS itself, solving the director's single point of failure; in fact, keepalived was born for LVS.
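The health-check idea can be sketched in shell. This is a toy sketch only: the probe is stubbed out, whereas keepalived's TCP_CHECK actually opens a connection to each RS; the addresses are taken from the lab environment below.

```shell
#!/bin/sh
# Toy health check: keep only responsive real servers in the pool.
# probe() is a stub; a real probe might be: curl -s -m 2 "http://$1/" >/dev/null
probe() {
    [ "$1" != "192.168.0.18" ]    # pretend rs1 (192.168.0.18) is down
}
pool=""
for rs in 192.168.0.18 192.168.0.28; do
    if probe "$rs"; then
        pool="$pool $rs"
    fi
done
echo "active pool:$pool"
```

Keepalived does this continuously: a failed RS is removed from the ipvs table and re-added once it passes the check again, which is exactly what the experiments below demonstrate.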
1. Experimental environment
4 nodes:
Keepalived1 + lvs1 (Director1): 192.168.0.48
Keepalived2 + lvs2 (Director2): 192.168.0.58
Real server1: 192.168.0.18
Real server2: 192.168.0.28
VIP: 192.168.0.38
2. Install the system software
Install ipvsadm + keepalived on the two director nodes:
# yum install ipvsadm keepalived -y
Install the nginx service on the two real server nodes:
# yum install epel-release -y
# yum install nginx -y
3. Set the configuration script
Configuration script on the two real server nodes:
# vim /usr/local/sbin/lvs_dr_rs.sh
#!/bin/bash
vip=192.168.0.38
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
Execute the script on both rs nodes:
bash /usr/local/sbin/lvs_dr_rs.sh
Keepalived node configuration (2 nodes):
Master Node (MASTER) profile
vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.38
    }
}
virtual_server 192.168.0.38 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.18 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.0.28 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Backup node (BACKUP) configuration file:
Copy keepalived.conf from the master node, then modify the following:
state MASTER -> state BACKUP
priority 100 -> priority 90
On both keepalived nodes, enable IP forwarding:
# echo 1 > /proc/sys/net/ipv4/ip_forward
4. Start keepalived
Start keepalived on the master first, then on the backup:
# service keepalived start
5. Verification result
Experiment 1
Manually stop nginx on node 192.168.0.18 (service nginx stop), then test from the client: access to http://192.168.0.38 remains normal, node .18 is never hit, and the content always comes from node .28.
Experiment 2
Manually restart nginx on node 192.168.0.18 (service nginx start), then test from the client: access to http://192.168.0.38 is normal, and both node .18 and node .28 are hit according to the rr scheduling algorithm.
Experiment 3
Test the HA feature of keepalived: first run ip addr on the master and confirm that the VIP (.38) is on the master node. Then run service keepalived stop on the master; the VIP is no longer there, and ip addr on the backup node shows that the VIP has correctly failed over to it. Client access through the VIP still works, verifying keepalived's HA behavior.
The above covers the principle, installation, and configuration of using LVS to achieve load balancing. If you have similar questions, the analysis above should help you understand; if you want to learn more, you are welcome to follow the industry information channel.