How to build a highly available web load balancer with LVS-DR + keepalived

The following shows how to build a highly available web load balancer based on LVS-DR and keepalived; hopefully it will help you in practical applications. Load balancing is a broad topic and the theory is covered at length elsewhere, so this write-up draws on accumulated hands-on experience.
Experimental environment: Red Hat 6.5, kernel 2.6.32-431.el6.x86_64
keepalived-1.2.16
ipvsadm-1.26-2.el6.x86_64
All virtual machines have the firewall turned off, SELinux disabled, and a local yum repository configured.
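For reference, these prerequisites can typically be satisfied on RHEL 6 as follows (illustrative commands, not from the original lab notes):
service iptables stop       # stop the firewall
chkconfig iptables off      # keep it off across reboots
setenforce 0                # SELinux permissive for now; set SELINUX=disabled in /etc/selinux/config to persist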
To follow along you should be familiar with how LVS-DR mode works. First prepare the environment LVS-DR requires: here the VIP, DIP and RIPs are configured on the same network segment, and ARP suppression is performed on the lo interface of each realserver (see the script below for details).
When keepalived is combined with LVS, note that there is no need to configure ipvsadm manually on the director: the LVS VIP and RIPs are declared in the keepalived configuration file, and all of the LVS configuration lives there. In an earlier attempt, plain LVS-DR was set up first, ipvsadm was configured by hand, and keepalived.conf was modified afterwards; that extra step proved counterproductive and the build failed.
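If you experimented with manual ipvsadm rules before switching to keepalived, it is worth flushing them first so that keepalived starts from a clean table (a suggested precaution, not part of the original procedure):
ipvsadm -C      # clear the entire IPVS rule table
ipvsadm -Ln     # confirm the table is now empty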
The purpose of this experiment is to build a highly available load balance for web.
DR1: VIP 192.168.168.12, DIP 192.168.168.100
DR2: VIP 192.168.168.12, DIP 192.168.168.107
realserver1: VIP 192.168.168.12, RIP1 192.168.168.102
realserver2: VIP 192.168.168.12, RIP2 192.168.168.103
Prerequisite: realserver1 and realserver2 already have httpd installed
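A quick way to verify (or satisfy) this prerequisite on each realserver, assuming the local yum repository is available:
rpm -q httpd || yum install -y httpd    # install apache if it is missing
service httpd status                    # confirm the service is present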
****** Set up DR1 first ******
1. Install the software: ipvsadm and keepalived must be installed on both DR1 and DR2.
###### Install ipvsadm to manage LVS ######
First, use lsmod to check whether the LVS modules are available:
[root@dr100 ~]# lsmod | grep -i ip_vs
ip_vs_rr                1420  1
ip_vs                 125220  3 ip_vs_rr
The modules are there (LVS is compiled into the kernel), so go to the local yum source mount directory and install ipvsadm:
[root@dr100 ~]# cd /mnt/Packages/
[root@dr100 Packages]# rpm -ivh ipvsadm-1.26-2.el6.x86_64.rpm
2. ###### Install keepalived ######
[root@dr100 src]# tar -zxvf keepalived-1.2.16.tar.gz
[root@dr100 src]# cd keepalived-1.2.16
[root@dr100 keepalived-1.2.16]# ./configure --with-kernel-dir=/usr/src/kernels/2.6.32-431.el6.x86_64/
configure may fail on missing dependencies; install openssl-devel and run it again:
[root@dr100 keepalived-1.2.16]# yum install -y openssl-devel
[root@dr100 keepalived-1.2.16]# ./configure --with-kernel-dir=/usr/src/kernels/2.6.32-431.el6.x86_64/    # point configure at the kernel headers so keepalived is built with LVS support
[root@dr100 keepalived-1.2.16]# make && make install
[root@dr100 keepalived-1.2.16]# echo $?
0
Next, copy keepalived's init script, sysconfig file, configuration file and binary into the standard locations to make administration easier:
[root@dr100 keepalived-1.2.16]# cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
[root@dr100 keepalived-1.2.16]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@dr100 keepalived-1.2.16]# mkdir /etc/keepalived
[root@dr100 keepalived-1.2.16]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@dr100 keepalived-1.2.16]# cp /usr/local/sbin/keepalived /usr/sbin/
[root@dr100 keepalived-1.2.16]# cd ~
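Optionally, register keepalived with init so it starts at boot (a common follow-up on RHEL 6, not part of the original steps):
chkconfig --add keepalived
chkconfig keepalived on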
###### Configure after installation ######
[root@dr100 ~]# vim /etc/keepalived/keepalived.conf    # the main configuration file
[root@dr100 ~]# cat !$
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
      acassen@firewall.loc          # alarm email addresses; more than one may be set, one per line
      failover@firewall.loc         # email alerts require a working local sendmail service
      sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.168.12       # SMTP server address
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
############ VRRP instance ############
vrrp_instance VI_1 {
    state MASTER                    # role of this keepalived node: MASTER is the primary, BACKUP the standby
    interface eth0                  # interface the VRRP instance runs on
    virtual_router_id 51
    priority 100                    # the higher the number, the higher the priority; the master DR must be higher than the backup DR
    advert_int 1
    authentication {
        auth_type PASS              # authentication type: PASS or AH
        auth_pass 1111              # authentication password
    }
    virtual_ipaddress {
        192.168.168.12              # the VIP of the primary DR; more than one may be set, one per line
    }
}
############ virtual server ############
virtual_server 192.168.168.12 80 {  # note: the IP address and the port are separated by a space
    delay_loop 6                    # health-check interval in seconds
    lb_algo rr                      # scheduling algorithm; rr is round robin (wlc is often a better choice)
    lb_kind DR                      # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
#    persistence_timeout 1          # session persistence in seconds; commented out here so round robin is easy to observe
    protocol TCP                    # forwarding protocol: TCP or UDP
    real_server 192.168.168.102 80 {    # IP address and port of realserver1
        weight 1                    # node weight; the higher the number, the greater the weight
        TCP_CHECK {
            connect_timeout 3       # no response within 3 seconds counts as a timeout
            nb_get_retry 3          # number of retries
            delay_before_retry 3    # interval between retries
        }
    }
    real_server 192.168.168.103 80 {    # IP address and port of realserver2
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
The configuration is complete; start keepalived:
[root@dr100 ~]# /etc/init.d/keepalived start
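To confirm that the master actually took the VIP, check eth0 and the keepalived entries in the system log (illustrative checks, not from the original text):
[root@dr100 ~]# ip addr show eth0 | grep 192.168.168.12     # the VIP should appear as a secondary address on eth0
[root@dr100 ~]# grep -i keepalived /var/log/messages | tail # look for the VRRP transition to MASTER state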
3. ###### DR1 is done; now configure realserver1 and realserver2 ######
realserver1 is configured with a script:
[root@realserver1 ~]# cat lvs.dr.sh
#!/bin/bash
vip=192.168.168.12
/etc/init.d/httpd start    # if apache is not installed, install it first
case $1 in
start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore    # these four lines perform ARP suppression
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ip addr add dev lo $vip
        ip a | grep -w inet
        echo "this is for start"
        ;;
stop)
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore    # restore normal ARP behaviour
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ip addr del dev lo $vip
        ip a | grep -w inet
        echo "this is for stop"
        ;;
*)
        echo "please start or stop"
        ;;
esac
exit 0
[root@realserver1 ~]# sh lvs.dr.sh start
[root@realserver1 ~]# ip a
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.168.12/32 scope global lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0b:b0:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.168.102/24 brd 192.168.168.255 scope global eth0
    inet6 fe80::20c:29ff:fe0b:b04e/64 scope link
       valid_lft forever preferred_lft forever
[root@realserver1 ~]# echo "realserver 192.168.168.102" > /var/www/html/index.html
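Note that the sysctl changes and the lo-bound VIP do not survive a reboot. One simple way to make them persistent is to run the script from rc.local (assuming the script is kept in root's home directory):
echo "sh /root/lvs.dr.sh start" >> /etc/rc.local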
Realserver2 is the same.
[root@realserver2 ~]# cat lvs.dr.sh
#!/bin/bash
vip=192.168.168.12
/etc/init.d/httpd start
case $1 in
start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ip addr add dev lo $vip
        ip a | grep -w inet
        echo "this is for start"
        ;;
stop)
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ip addr del dev lo $vip
        ip a | grep -w inet
        echo "this is for stop"
        ;;
*)
        echo "please start or stop"
        ;;
esac
exit 0
[root@realserver2 ~]# sh lvs.dr.sh start
[root@realserver2 ~]# ip a
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.168.12/32 scope global lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2a:85:bb brd ff:ff:ff:ff:ff:ff
    inet 192.168.168.103/24 brd 192.168.168.255 scope global eth0
    inet6 fe80::20c:29ff:fe2a:85bb/64 scope link
       valid_lft forever preferred_lft forever
[root@realserver2 ~]# echo "realserver 192.168.168.103" > /var/www/html/index.html
4. ###### After the realservers are configured, check on DR1 ######
[root@dr100 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.168.12:80 rr
  -> 192.168.168.102:80           Route   1      0          0
  -> 192.168.168.103:80           Route   1      0          0
This shows the configuration succeeded. The ipvsadm rules can also be saved to a file:
[root@dr100 ~]# ipvsadm-save > ipvsadm.txt    # save the rules to ipvsadm.txt
[root@dr100 ~]# cat ipvsadm.txt
-A -t bogon:http -s rr
-a -t bogon:http -r bogon:http -g -w 1
-a -t bogon:http -r bogon:http -g -w 1
[root@dr100 ~]# ipvsadm-restore < ipvsadm.txt    # load the saved rules back into ipvsadm
5. ###### Verify in a browser ######
Open a browser, visit the VIP 192.168.168.12, and press F5 to refresh; the pages of the two realservers should alternate.
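Equivalently, the round robin can be watched from the command line on any client on the segment (a quick check, assuming curl is installed):
for i in 1 2 3 4; do curl -s http://192.168.168.12/; done
# expected output alternates between the two pages:
# realserver 192.168.168.102
# realserver 192.168.168.103
# realserver 192.168.168.102
# realserver 192.168.168.103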
This shows that DR1 has been built successfully. Next, build DR2.
****** Configuration of DR2 ******
Repeat the ipvsadm and keepalived installation exactly as on DR1; it is not repeated here.
The only differences between DR1 and DR2 are in keepalived.conf: state MASTER becomes BACKUP, and the priority is lowered.
[root@dr107 ~]# cat !$
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
      acassen@firewall.loc          # alarm email addresses; more than one may be set, one per line
      failover@firewall.loc         # email alerts require a working local sendmail service
      sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.168.12       # SMTP server address
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
############ VRRP instance ############
vrrp_instance VI_1 {
    state BACKUP                    # role of this keepalived node: MASTER is the primary, BACKUP the standby
    interface eth0                  # interface the VRRP instance runs on
    virtual_router_id 51
    priority 50                     # the higher the number, the higher the priority; the master DR must be higher than the backup DR
    advert_int 1
    authentication {
        auth_type PASS              # authentication type: PASS or AH
        auth_pass 1111              # authentication password; must match on master and backup
    }
    virtual_ipaddress {
        192.168.168.12              # the VIP; more than one may be set, one per line
    }
}
############ virtual server ############
virtual_server 192.168.168.12 80 {  # note: the IP address and the port are separated by a space
    delay_loop 6                    # health-check interval in seconds
    lb_algo rr                      # scheduling algorithm; rr is round robin (wlc is often a better choice)
    lb_kind DR                      # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
#    persistence_timeout 1          # session persistence in seconds; commented out here so round robin is easy to observe
    protocol TCP                    # forwarding protocol: TCP or UDP
    real_server 192.168.168.102 80 {    # IP address and port of realserver1
        weight 1                    # node weight; the higher the number, the greater the weight
        TCP_CHECK {
            connect_timeout 3       # no response within 3 seconds counts as a timeout
            nb_get_retry 3          # number of retries
            delay_before_retry 3    # interval between retries
        }
    }
    real_server 192.168.168.103 80 {    # IP address and port of realserver2
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
Then start keepalived:
[root@dr107 ~]# /etc/init.d/keepalived start
[root@dr107 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.168.12:80 rr
  -> 192.168.168.102:80           Route   1      0          0
  -> 192.168.168.103:80           Route   1      0          0
###### Everything is configured successfully; failover can now be tested ######
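A simple failover drill (illustrative; the exact log output depends on your syslog setup): stop keepalived on DR1, watch the VIP move to DR2, then bring DR1 back.
[root@dr100 ~]# /etc/init.d/keepalived stop              # simulate a master failure
[root@dr107 ~]# ip addr show eth0 | grep 192.168.168.12  # the VIP should now be bound on DR2
[root@dr100 ~]# /etc/init.d/keepalived start             # DR1 has the higher priority, so it preempts and takes the VIP back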
****** Background theory collected from the Internet ******
1. Introduction to LVS:
LVS is short for Linux Virtual Server, a virtual server cluster system. The project was founded by Dr. Zhang Wensong in May 1998 and is one of the earliest free software projects in China. Dr. Zhang Wensong works at Alibaba Group, mainly on cluster technology, operating systems, object storage and databases. (From Baidu Encyclopedia.)
2. Composition of an LVS cluster:
An LVS server system consists of three parts:
1) Load balancing layer: sits at the front of the whole cluster system; to avoid a single point of failure it generally consists of at least two load schedulers.
2) Server group layer: a set of machines that actually run the application services. A Real Server can be any number of Web, FTP, DNS, Mail, video or other servers, connected by a high-speed LAN or WAN. To save resources, the Director Server sometimes doubles as a Real Server in production!
3) Shared storage layer: provides the Real Servers with shared, content-consistent storage. It can be a disk array, Red Hat's GFS file system, Oracle's OCFS2 file system, and so on.
The Director Server is the core of LVS! To date, the Director can only run on Linux and FreeBSD. On Linux kernels 2.6 and later, the LVS modules are already built in, so LVS works without any extra setup.
Real Servers can run on almost any platform: Windows, Linux, Solaris, AIX, BSD and so on.
There are three common forwarding modes (a fourth also exists):
1. NAT mode (network address translation)
Virtual Server via Network Address Translation (VS/NAT)
Scheduling is achieved by rewriting network addresses. When the scheduler (LB) receives a request packet from a client (the destination IP of the request is the VIP), it decides, according to the scheduling algorithm, which backend real server (RS) should handle the request. It then rewrites the destination IP address and port of the packet to the real server's address (RIP), so that the RS receives the client's request. After the RS has produced its response, it follows its default route (in NAT mode every RS's default route must point at the LB) and sends the response back to the LB; the LB then rewrites the source address of the packet to the virtual address (VIP) and sends it back to the client.
Roughly, the steps are:
1) The client sends a request; the destination IP is the VIP.
2) The request reaches the LB, which rewrites the destination address to an RIP and the corresponding port according to the scheduling algorithm (the RIP is chosen by that algorithm), and records the connection in its connection HASH table.
3) The packet travels from the LB to the chosen webserver RS, which generates the response. The webserver's gateway must be the LB, so the response returns to the LB.
4) After receiving the returned data from the RS, the LB uses the connection HASH table to rewrite the source address to the VIP and the destination address to the CIP with the corresponding port, then sends the data on to the client.
5) The client therefore only ever sees the VIP; the RIPs stay hidden.
Pros and cons of the NAT mode:
1. NAT rewrites the addresses of both request and response packets on the LB, so under heavy traffic the LB becomes a serious bottleneck; NAT generally scales to at most 10-20 nodes.
2. Only the LB needs a public IP address.
3. The gateway of every backend node server must be the intranet address of the scheduler LB.
4. NAT mode supports translation of both IP addresses and ports, i.e. the port the user requests may differ from the real server's port.
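As an illustration of point 3 (using this lab's DIP 192.168.168.100 purely as an example; NAT is not the mode this article deploys), the RS default route in NAT mode would point at the director:
ip route replace default via 192.168.168.100    # RS default gateway = the director's internal address
ip route show | grep default                    # verify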
2. TUN mode
Virtual Server via IP Tunneling (VS/TUN). In NAT mode every request and response must be rewritten by the scheduler, so as client requests grow the scheduler's processing capacity becomes a bottleneck. To solve this, in TUN mode the scheduler forwards the request packet to the real server through an IP tunnel, and the real server returns the processed data directly to the client, so the scheduler only handles inbound request packets. Since response data is generally much larger than request data, VS/TUN can raise the maximum throughput of the cluster by as much as a factor of ten.
VS/TUN differs from NAT mode in that nothing is rewritten between LB and RS. Instead, the client's request packet is encapsulated inside an IP tunnel and sent to the RS node; the RS de-encapsulates the tunnel, processes the request, and sends the response directly to the client via its own external address, bypassing the LB.
1) The client sends a request packet with destination address VIP; it arrives at the LB.
2) The LB receives the packet and encapsulates it with an IP tunnel, i.e. adds an IP Tunnel header in front of the original header, and sends it out.
3) The RS node receives the packet, recognises the IP Tunnel header (a logical, invisible tunnel between LB and RS), removes it, obtains the client's original request and processes the response.
4) When the response is ready, the RS uses its own outbound line to send it straight to the client; the source IP is still the VIP. (The RS node must configure the VIP on its local loopback interface and do ARP suppression.)
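For comparison with the lo-based DR setup shown earlier, this is roughly what the RS side of TUN mode looks like (a sketch for illustration only; TUN is not what this article deploys):
modprobe ipip                                       # load the IP-in-IP tunnel module
ip addr add 192.168.168.12/32 dev tunl0             # bind the VIP to the tunnel interface
ip link set tunl0 up
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter    # relax reverse-path filtering on the tunnel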
3. DR mode (direct routing)
Virtual Server via Direct Routing (VS/DR)
DR mode delivers the request to the real server by rewriting the destination MAC address of the request packet; the real server's response is returned directly to the client. Like TUN mode, DR mode greatly improves the scalability of the cluster; moreover, it avoids the overhead of IP tunneling and does not require the real servers to support a tunneling protocol. It does, however, require that the scheduler LB and the real servers RS each have a network interface on the same physical segment, i.e. they must be in the same LAN.
DR mode is the most widely deployed mode on the Internet. A brief description of its principle and workflow:
Connection scheduling and management in VS/DR are the same as in NAT and TUN; only the packet-forwarding method differs. DR mode routes the packet directly to the target real server: the scheduler dynamically picks a server according to each real server's load, connection count and so on, and then, without modifying the destination IP address or port and without encapsulating the packet, rewrites the destination MAC address of the request frame to the chosen real server's MAC address and sends the frame out on the server group's LAN. Because the frame carries the real server's MAC address and the machines share a LAN, the real server is guaranteed to receive it. When the real server opens the packet it finds the destination IP is the VIP, which matches its own address only because the VIP is configured on its local loopback interface. (Side note: since all cluster machines carry the VIP on their lo interfaces, their ARP responses for it would conflict, so ARP responses for the VIP on the real servers' lo interfaces must be turned off.) The real server then processes the request and sends the response back to the client according to its own routing information; the source IP is still the VIP.
Summary of DR mode:
1. Forwarding is achieved by rewriting the destination MAC address of the packet on the scheduler LB; the source IP remains the CIP and the destination IP remains the VIP.
2. Request packets pass through the scheduler, but the responses from the RS do not, so DR is very efficient under high concurrency (compared with NAT mode).
3. Because DR forwards by rewriting MAC addresses, all RS nodes and the scheduler LB must be on the same LAN.
4. RS hosts must bind the VIP on the lo interface and configure ARP suppression.
5. The default gateway of an RS node does not need to be the LB; it is configured as the normal upstream router's gateway, letting the RS reach the outside directly.
6. Since the scheduler in DR mode only rewrites MAC addresses, it cannot rewrite the destination port, so the RS must serve on the same port as the VIP.
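One way to confirm in this lab that the ARP suppression of point 4 works (a suggested check, not from the original text): from another host on the segment, ask who answers for the VIP; every reply should carry the active director's MAC address, never a realserver's.
arping -I eth0 -c 3 192.168.168.12    # all replies should show the director's MAC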
This concludes the walkthrough of building a highly available web load balancer based on LVS-DR and keepalived; hopefully it proves useful in practice.