This article walks through the principles, working modes, and deployment of an LVS-NAT load balancing cluster, in the hope that it will help you in practical applications. Load balancing is a broad topic and there is no shortage of theory and books online; here we focus on hands-on experience from the field.
Introduction to LVS
LVS (Linux Virtual Server) is an open source load balancing project started by Dr. Zhang Wensong. LVS has since been integrated into the Linux kernel as a module, implementing IP-based load balancing for incoming requests. When an Internet user accesses the company's external load balancing address, the user's Web request is sent to the LVS scheduler. Based on a preset algorithm, the scheduler decides which back-end Web server receives the request; a round-robin algorithm, for example, distributes external requests evenly across all back-end servers. Although a request that reaches the LVS scheduler is forwarded to a real server in the back end, the real servers typically attach to the same shared storage and therefore provide the same content: no matter which real server the end user lands on, the service is identical, and the whole cluster is transparent to its users. Finally, depending on the LVS working mode, the real server returns the requested data to the end user in different ways. The LVS working modes are NAT mode, TUN mode, and DR mode.
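Because IPVS lives in the kernel, you can verify on any modern Linux machine that the module is available before building anything. This quick check is an aside, not one of the deployment steps; the same commands reappear on the scheduler later in the walkthrough.

modprobe ip_vs        # load the IPVS kernel module (a no-op if it is already loaded)
lsmod | grep ip_vs    # confirm the module is loaded
cat /proc/net/ip_vs   # shows the IPVS version and the (currently empty) virtual server table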
Understanding load balancing cluster principles

A cluster consists of multiple hosts but appears to the outside, in Internet applications, as a single system. As sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up. One solution is to buy expensive minicomputers or mainframes; the alternative is to build a cluster out of ordinary servers. According to their purpose, clusters can be divided into three types: load balancing clusters, high availability clusters, and high performance computing clusters.

Load balancing cluster (LB, Load Balance Cluster): aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall throughput under load. Load distribution in an LB cluster depends on the scheduling algorithm of the master node.

High availability cluster (HA, High Availability Cluster): aims to improve the reliability of the application system and minimize downtime, ensuring service continuity and achieving fault tolerance. HA clusters work in either duplex (active-active) or master-slave mode.

High performance computing cluster (HPC): aims to increase the CPU computing power of the application system by pooling hardware resources and analytical capability, approaching the capability of large supercomputers. High performance here depends on distributed and parallel computing: through dedicated hardware and software, the CPU, memory, and other resources of many servers are combined to carry out large-scale computation.

Load balancing cluster working modes

Load balancing clusters are currently the most widely used cluster type in enterprises. There are three working modes: address translation, IP tunneling, and direct routing.

NAT mode (Network Address Translation): similar to a firewall's private network structure. The load scheduler acts as the gateway of all server nodes, serving both as the entry point for clients and as the exit through which each node's responses return to the client. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.

TUN mode (IP Tunnel): uses an open network structure in which the load scheduler serves only as the entry point for clients. Each node responds to the client directly over its own network connection rather than back through the load scheduler. The server nodes can be scattered across different locations on the Internet, each with an independent public IP address, and they communicate with the load scheduler through dedicated IP tunnels. The whole environment sits on the public network.

DR mode (Direct Routing): uses a semi-open network structure similar to TUN mode, but the nodes are not scattered across the Internet; they sit on the same physical network as the scheduler. The load scheduler and the node servers communicate over the local network, so no dedicated IP tunnels are needed.
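For reference, these three forwarding modes map directly onto the forwarding-method flags of the ipvsadm tool used later in this article. The sketch below is illustrative only: 10.0.0.100 is a placeholder virtual IP and the 192.168.1.x addresses are placeholder real servers, not addresses from the lab that follows.

ipvsadm -A -t 10.0.0.100:80 -s rr                    # define a virtual service on the VIP
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.10:80 -m    # NAT mode: -m (masquerading), replies return through the director
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.11:80 -i    # TUN mode: -i (IP-in-IP tunneling), real servers reply to clients directly
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.12:80 -g    # DR mode: -g (gatewaying / direct routing), the default method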
LVS load scheduling algorithms

Round Robin (rr): access requests are distributed to the nodes (real servers) in the cluster in turn, treating every server equally regardless of its actual number of connections or system load.

Weighted Round Robin (wrr): requests are distributed according to the processing capacity of each real server. The scheduler can automatically query the load of each node and adjust its weight dynamically, so that servers with stronger processing capacity carry more of the access traffic.

Least Connections (lc): requests are allocated according to the number of established connections on each real server, giving priority to the node with the fewest connections.

Weighted Least Connections (wlc): used when the performance of the server nodes differs significantly. Weights can be adjusted automatically for each real server, and nodes with higher weights carry a larger share of the active connection load. (A short ipvsadm sketch after the package installation commands below shows how these algorithms are selected in practice.)

Lab environment

Now for the experiment. We need five virtual machines:

VM 1, load balancing scheduler: external address 12.0.0.1, internal address 192.168.200.1
VM 4, web server (Apache) node: 192.168.200.110
VM 5, web server node: 192.168.200.120
VM 6, NFS shared storage node: 192.168.200.130
VM 7, client: 12.0.0.12

First, on the NFS node (VM 6), install the NFS and remote procedure call packages:
[root@localhost ~]# yum install nfs-utils rpcbind -y

On web nodes VM 4 and VM 5, install the web service:
[root@localhost ~]# yum install httpd -y

On the scheduler (VM 1), install the LVS administration tool:
[root@localhost ~]# yum install ipvsadm -y
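Relating the scheduling algorithms above to ipvsadm syntax: the -s option selects the scheduler and -w sets a real server's weight. This is a minimal sketch, separate from the lab configuration; the VIP 10.0.0.100 and the real-server addresses are placeholders.

ipvsadm -A -t 10.0.0.100:80 -s wrr                        # weighted round robin
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.10:80 -m -w 3    # this node gets roughly 3x the requests
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.11:80 -m -w 1
ipvsadm -E -t 10.0.0.100:80 -s wlc                        # -E edits an existing virtual service; -s also accepts rr and lc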
Configure the NFS server (VM 6): host-only mode, since all of our servers are on the same LAN. Give it a fixed IP.

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33    # configure the ens33 NIC
BOOTPROTO=static          # static addressing
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=66a1e3d6-5c57-42ab-9996-b942b049ef85
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.200.130    # IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.200.1     # gateway (the scheduler's internal address)

Stop the firewall, disable SELinux enforcement, and enable the services:

[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start nfs.service
[root@localhost ~]# systemctl status nfs.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: active (exited) since Tue 2019-11-26 ...
[root@localhost ~]# systemctl start rpcbind.service
[root@localhost ~]# systemctl status rpcbind.service
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service ...)

Configure the shared directories and give them 777 permissions:

[root@localhost ~]# vim /etc/exports
/usr/share *(ro,sync)                  # read-only, synchronous, accessible to all hosts
/opt/accp 192.168.200.0/24(rw,sync)    # shared to the 192.168.200.0/24 segment, read-write, synchronous
/opt/benet 192.168.200.0/24(rw,sync)
[root@localhost ~]# cd /opt/
[root@localhost opt]# mkdir benet accp
[root@localhost opt]# chmod 777 accp/ benet/    # readable, writable, and executable by everyone
[root@localhost opt]# ls -l
total 0
drwxrwxrwx. 2 root root 6 Nov 26 17:13 accp
drwxrwxrwx. 2 root root 6 Nov 26 17:13 benet
drwxr-xr-x. 2 root root 6 Mar 26  2015 rh

Publish the shares:

[root@localhost opt]# exportfs -rv
exporting 192.168.200.0/24:/opt/benet
exporting 192.168.200.0/24:/opt/accp
exporting *:/usr/share
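Optionally, the exports can be double-checked from the NFS server itself before touching the web nodes. This is a verification sketch added here for convenience, not a step from the original procedure.

exportfs -v                          # list the active exports with their options
showmount -e localhost               # what NFS clients will see
rpcinfo -p | grep -E 'nfs|mountd'    # confirm the NFS and mountd services are registered with rpcbind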
Configure web server node VM 4: host-only mode, with a fixed IP.

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=c3f0a196-6819-4702-9b54-7cad18402591
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.200.110
NETMASK=255.255.255.0
GATEWAY=192.168.200.1

Enable the services and test that the node can reach the NFS server (VM 6):

[root@localhost ~]# systemctl restart network
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6       0      0 :::80       :::*       LISTEN      100863/httpd
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.724 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=0.356 ms

Mount the share, write a file on web node VM 4, and check that the NFS server (VM 6) stores it:

[root@localhost ~]# showmount -e 192.168.200.130    # list the shared directories on VM 6
Export list for 192.168.200.130:
/usr/share *
/opt/benet 192.168.200.0/24
/opt/accp  192.168.200.0/24
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/accp /var/www/html/    # mount the share
[root@localhost ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/centos-root     20G  3.4G   17G  17% /
devtmpfs                   897M     0  897M   0% /dev
tmpfs                      912M     0  912M   0% /dev/shm
tmpfs                      912M  9.6M  903M   2% /run
tmpfs                      912M     0  912M   0% /sys/fs/cgroup
/dev/sda1                  6.0G  179M  5.9G   3% /boot
/dev/mapper/centos-home     10G   37M   10G   1% /home
tmpfs                      183M   40K  183M   1% /run/user/0
tmpfs                      183M  4.0K  183M   1% /run/user/42
192.168.200.130:/opt/accp   20G  3.8G   17G  19% /var/www/html
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "THIS IS ACCP WEB" > index.html

On the NFS server (VM 6), verify that the file arrived:

[root@localhost opt]# cd accp/
[root@localhost accp]# ls
index.html
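Note that mount.nfs does not survive a reboot. If you want the mapping to persist, an /etc/fstab entry such as the following can be added on the web node; this is an optional addition, not part of the original steps.

# /etc/fstab on web node VM 4 (192.168.200.110)
192.168.200.130:/opt/accp  /var/www/html  nfs  defaults,_netdev  0  0

# then verify the entry by remounting everything listed in /etc/fstab
mount -a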
Web server node VM 5 gets the same treatment: host-only mode, with a fixed IP.

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=a6cf69fe-eb42-4a99-9239-0da4cdeae0c7
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.200.120
NETMASK=255.255.255.0
GATEWAY=192.168.200.1
[root@localhost ~]# systemctl restart network
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0

Mount the share, write a file on web node VM 5, and check that the NFS server (VM 6) stores it:

[root@localhost ~]# mount.nfs 192.168.200.130:/opt/benet /var/www/html/
[root@localhost ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/centos-root      20G  4.3G   16G  22% /
devtmpfs                    897M     0  897M   0% /dev
tmpfs                       912M     0  912M   0% /dev/shm
tmpfs                       912M  9.5M  903M   2% /run
tmpfs                       912M     0  912M   0% /sys/fs/cgroup
/dev/sda1                   6.0G  179M  5.9G   3% /boot
/dev/mapper/centos-home      10G   36M   10G   1% /home
tmpfs                       183M   44K  183M   1% /run/user/0
192.168.200.130:/opt/benet   20G  3.8G   17G  19% /var/www/html
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is benet web" > index.html
[root@localhost html]# systemctl start httpd.service

On the NFS server (VM 6), verify:

[root@localhost accp]# cd ../
[root@localhost opt]# cd benet/
[root@localhost benet]# ls
index.html
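At this point both web nodes should serve their own page on the internal network. Before configuring the scheduler, it is worth confirming this from any host on the 192.168.200.0/24 segment that has curl installed (the NFS node, for example). This check is an addition to the original walkthrough.

curl http://192.168.200.110/    # expect: THIS IS ACCP WEB
curl http://192.168.200.120/    # expect: this is benet web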
Configure the load balancing scheduler (VM 1): host-only mode, with two network cards. Configure both NICs.

ifcfg-ens36 (external interface):
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens36
DEVICE=ens36
ONBOOT=yes
IPADDR=12.0.0.1
NETMASK=255.255.255.0

ifcfg-ens33 (internal interface):
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=849aa04e-1874-490f-8cb0-b2fde4b9a6f8
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.200.1
NETMASK=255.255.255.0

[root@localhost ~]# systemctl restart network    # restart the network service

Enable routing (IP forwarding) for the NAT translation:

[root@localhost ~]# vim /etc/sysctl.conf    # edit the configuration file
net.ipv4.ip_forward=1                       # add this line
[root@localhost ~]# sysctl -p               # apply the change
net.ipv4.ip_forward = 1

Set up the SNAT rule:

[root@localhost ~]# iptables -t nat -F      # clear the NAT table
[root@localhost ~]# iptables -F             # clear the filter table
[root@localhost ~]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1

This appends (-A) a rule to the POSTROUTING chain of the nat table: -o specifies the outgoing interface, -s the source network, and -j SNAT rewrites the source address to 12.0.0.1.
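A quick way to confirm that forwarding and the SNAT rule are in place on the scheduler; this is an optional check, not part of the original text.

sysctl net.ipv4.ip_forward                            # should print: net.ipv4.ip_forward = 1
iptables -t nat -L POSTROUTING -n -v --line-numbers   # the SNAT rule for 192.168.200.0/24 -> 12.0.0.1 should be listed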
On the client (VM 7), bind its fixed IP address, 12.0.0.12. Then go to web server VM 4 and test that it can reach the client:

[root@localhost html]# ping 12.0.0.12
PING 12.0.0.12 (12.0.0.12) 56(84) bytes of data.
64 bytes from 12.0.0.12: icmp_seq=1 ttl=127 time=0.815 ms
64 bytes from 12.0.0.12: icmp_seq=2 ttl=127 time=0.752 ms
64 bytes from 12.0.0.12: icmp_seq=3 ttl=127 time=0.727 ms
64 bytes from 12.0.0.12: icmp_seq=4 ttl=127 time=0.712 ms

On the scheduler (VM 1), load the LVS kernel module:

[root@localhost ~]# modprobe ip_vs
[root@localhost ~]# cat /proc/net/ip_vs    # view the LVS kernel version
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn

Save the rule set first so that the ipvsadm service can be started:

[root@localhost ~]# ipvsadm --save > /etc/sysconfig/ipvsadm
[root@localhost ~]# systemctl start ipvsadm.service
[root@localhost ~]# systemctl status ipvsadm.service
● ipvsadm.service - Initialise the Linux Virtual Server
   Loaded: loaded (/usr/lib/systemd/system/ipvsadm.service; disabled; vendor preset: disabled)
   Active: active (exited) since Tue 2019-11-26 17:59:41 CST; 9s ago

Write the LVS virtual server script and run it:

[root@localhost ~]# cd /opt/
[root@localhost opt]# vim nat.sh
#!/bin/bash
ipvsadm -C                                            # clear the existing IPVS rules
ipvsadm -A -t 12.0.0.1:80 -s rr                       # -A add a virtual server, -t VIP and port, -s scheduling algorithm (round robin)
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m    # -a add a real server, -r real node address and port, -m NAT (masquerading)
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.120:80 -m
ipvsadm                                               # show the resulting table
[root@localhost opt]# source nat.sh    # run the script
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  localhost.localdomain:http rr
  -> 192.168.200.110:http         Masq    1      0          0
  -> 192.168.200.120:http         Masq    1      0          0

Set up the NAT address translation again (the tables were cleared earlier):

[root@localhost opt]# iptables -F            # clear the filter table
[root@localhost opt]# iptables -t nat -F     # clear the NAT table
[root@localhost opt]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1

As before, this appends (-A) a rule to the POSTROUTING chain of the nat table: -o specifies the outgoing interface, -s the source network, and -j SNAT rewrites the source address to 12.0.0.1. Finally, go to the client and test whether the content of the web servers can be reached through the public address.
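On the client (12.0.0.12), the final check is simply to request the VIP repeatedly and watch the scheduler alternate between the two real servers. A browser pointed at http://12.0.0.1/ works; if the client has curl, the sketch below (an illustration, not part of the original text) shows the round-robin behaviour, and ipvsadm on the scheduler shows how connections were distributed.

# on the client
curl http://12.0.0.1/    # first request returns one page, e.g. THIS IS ACCP WEB
curl http://12.0.0.1/    # second request returns the other page, this is benet web

# on the scheduler
ipvsadm -Ln              # list the virtual server and real servers with numeric addresses
ipvsadm -Ln --stats      # per-server packet and byte counters
ipvsadm -Lnc             # current connection entries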
That covers the principles, mode analysis, and working modes of LVS-NAT load balancing cluster deployment. If there is anything else you need to know, you can look it up in the industry resources or ask our professional technical engineers, who have more than ten years of experience in the field.