2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
The following is a summary of LVS load balancing clusters, which we hope will help you in practical application. Load balancing covers a lot of ground; the theory itself is limited and there are many references online, so today we will draw on accumulated industry experience to walk through it.
Cluster Application Overview

The meaning of "cluster":
1. Cluster, also called a server group
2. Composed of multiple hosts, but presented to Internet applications as a single whole

As sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer cope. There are two options:
1. Use expensive minicomputers and mainframes
2. Build a service cluster from ordinary servers

Enterprise cluster classification
Depending on the cluster's target, clusters can be divided into three types:
1. Load balancing clusters (round robin, weighted scheduling, least connections)
2. High availability clusters (access speed, reliability)
3. High performance computing clusters (concurrent task processing)
Load balancing Cluster (Load Balance Cluster)
1. Aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high load (LB) overall performance
2. The load distribution of LB depends on the shunt (scheduling) algorithm of the master node
High availability Cluster (High Availability Cluster)
1. Aims to improve the reliability of the application system and reduce downtime as much as possible, ensuring service continuity and achieving the fault-tolerance effect of high availability (HA)
2. HA working modes include duplex (active-active) and master-slave (active-standby)
High performance Computing Cluster (High Performance Computer Cluster)
1. Aims to improve the CPU computing speed of the application system and expand its hardware resources and analysis capability, obtaining high performance computing (HPC) power equivalent to that of large machines and supercomputers
2. The high performance of an HPC cluster depends on "distributed computing" and "parallel computing": the CPU, memory and other resources of multiple servers are integrated through dedicated hardware and software, achieving computing power otherwise found only in large machines and supercomputers

Load balancing cluster working modes

Load balancing clusters are currently the most widely used cluster type in enterprises.
The load scheduling technology of the cluster has three working modes:
1. Address translation (NAT)
2. IP tunnel (TUN)
3. Direct routing (DR)

NAT mode
Address Translation (Network Address Translation):
1. Referred to as NAT mode; similar to a firewall's private network structure, the load scheduler acts as the gateway for all server nodes: it is both the access entrance for clients and the exit through which each node's responses return to clients
2. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes
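In NAT mode the scheduler must forward packets between its two networks and rewrite the pool's replies to its public address. A minimal sketch using this lab's addressing (public ens36 at 12.0.0.1, private pool 192.168.200.0/24); the settings are written to a /tmp fragment for inspection, since applying them for real requires root:

```shell
# Kernel forwarding setting a NAT-mode director needs; on the real
# scheduler this line goes into /etc/sysctl.conf and is applied with
# sysctl -p (requires root)
cat > /tmp/lvs-nat-sysctl.conf <<'EOF'
net.ipv4.ip_forward = 1
EOF

# The SNAT rule that rewrites traffic from the private pool to the
# public IP; kept as a string here rather than executed, since iptables
# needs root
SNAT_RULE='iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1'
echo "$SNAT_RULE"
```

The walkthrough below applies exactly these two settings on the scheduler in step 9.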
IP Tunnel (IP Tunnel):
1. Referred to as TUN mode; uses an open network structure in which the load scheduler serves only as the clients' access entrance, and each node responds to clients directly through its own Internet connection rather than back through the load scheduler
2. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through a dedicated IP tunnel
Direct routing (Direct Routing)
1. Referred to as DR mode; adopts a semi-open network structure similar to TUN mode, except that the nodes are not scattered across different locations: they sit on the same physical network as the scheduler
2. The load scheduler connects to each node server over the local network, so no dedicated IP tunnel needs to be established
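The walkthrough below uses NAT mode, but for contrast, here is a hedged sketch of what DR mode asks of each real server: every node shares the virtual IP, so each real server must bind the VIP on loopback and suppress ARP replies for it. The VIP 12.0.0.100 is a hypothetical example, and the kernel settings are written to a /tmp fragment because applying them needs root:

```shell
# Hypothetical VIP for illustration (not part of this lab's addressing)
VIP=12.0.0.100

# ARP-suppression settings a DR real server typically needs; on a real
# node they would be applied with: sysctl -p /tmp/lvs-dr-arp.conf
# followed by: ip addr add ${VIP}/32 dev lo label lo:0
cat > /tmp/lvs-dr-arp.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
echo "DR ARP fragment prepared for VIP ${VIP}"
```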
The structure of load balancing:
1. First layer: load scheduler (Load Balancer or Director)
2. Second layer: server pool (Server Pool)
3. Third layer: shared storage (Share Storage)

LVS load scheduling algorithms
1. Round robin (Round Robin):
1. Incoming access requests are distributed sequentially to each node (real server) in the cluster
2. Each server is treated equally, regardless of its actual connection count and system load
2. Weighted round robin (Weighted Round Robin):
1. Based on the processing capacity of each real server, the scheduler can automatically query each node's load and adjust its weight dynamically
2. Ensures that servers with stronger processing power carry more of the access traffic
3. Least connections (Least Connections):
Based on the number of established connections on each real server, incoming access requests are assigned preferentially to the node with the fewest connections
4. Weighted least connections (Weighted Least Connections):
1. When the performance of server nodes differs greatly, the weight can be adjusted automatically for each real server; nodes with higher weights carry a larger share of the active connections

Using the ipvsadm tool: LVS cluster creation and management steps
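The four algorithms above map onto ipvsadm's `-s` option (`rr`, `wrr`, `lc`, `wlc`), and `-w` sets a node's weight for the weighted variants. A sketch using this lab's VIP and web nodes; the commands are assembled as strings rather than executed, since applying them requires root and the ip_vs module:

```shell
# Virtual server address and port from this lab's setup
VIP=12.0.0.1:80

# -A adds a virtual server; -s picks the scheduler (here weighted least
# connections); -a adds a real server; -m means NAT (masquerade) mode
ADD_VS="ipvsadm -A -t ${VIP} -s wlc"
ADD_RS1="ipvsadm -a -t ${VIP} -r 192.168.200.110:80 -m -w 3"
ADD_RS2="ipvsadm -a -t ${VIP} -r 192.168.200.120:80 -m -w 1"
printf '%s\n' "$ADD_VS" "$ADD_RS1" "$ADD_RS2"
```

With these weights, the 192.168.200.110 node would receive roughly three times the connections of the other; the walkthrough below uses the simpler `rr` scheduler with equal weights.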
NFS shared storage service
NFS: Network File System
1. Depends on RPC (remote procedure call)
2. Requires the nfs-utils and rpcbind packages
3. System services: nfs, rpcbind
4. Shared configuration file: /etc/exports
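The shape of the /etc/exports entries used later in this walkthrough, written to a sample file here for inspection; on the real storage server these lines go into /etc/exports and are published with `exportfs -rv`:

```shell
# Each line is: <directory> <allowed clients>(options)
# ro/rw = read-only/read-write; sync = write changes to disk before replying
cat > /tmp/exports.sample <<'EOF'
/usr/share *(ro,sync)
/opt/kgc 192.168.200.0/24(rw,sync)
/opt/accp 192.168.200.0/24(rw,sync)
EOF
cat /tmp/exports.sample
```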
Accessing NFS shared resources from the client
1. Install the rpcbind package and start the rpcbind service
2. Manually mount the NFS shared directory
3. Set up automatic mounting through fstab

Experimental environment deployment

CentOS 7-1: scheduler and gateway (two network cards); external network (ens36): 12.0.0.1, internal network (ens33): 192.168.200.1
CentOS 7-2: website server (Apache) 192.168.200.110
CentOS 7-3: website server (Apache) 192.168.200.120
CentOS 7-4: shared storage server 192.168.200.130
Windows 7: client 12.0.0.12

Step 1: with the machines networked, install the service packages on each server
1. Install ipvsadm management tools on the scheduler server
# First add another network adapter so the machine has two NICs
# Install the ipvsadm management tool
[root@localhost ~]# yum install ipvsadm -y
2. Operations on the two web node servers

# Install the httpd service
[root@localhost ~]# yum install httpd -y
3. Operations on shared storage servers
# Use rpm to check whether the nfs-utils and rpcbind packages are present
[root@localhost ~]# rpm -q nfs-utils
nfs-utils-1.3.0-0.48.el7.x86_64
[root@localhost ~]# rpm -q rpcbind
rpcbind-0.2.0-42.el7.x86_64

Step 2: configure the shared storage server

# Modify the ens33 network card configuration
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.130    # append IP address, subnet mask and gateway under the last line
NETMASK=255.255.255.0
GATEWAY=192.168.200.1

# Restart the network service
[root@localhost ~]# systemctl restart network
# Turn off the firewall and security features
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
# Start the NFS sharing services
[root@localhost ~]# systemctl start nfs.service
[root@localhost ~]# systemctl start rpcbind.service
# Edit the shared directory configuration file, writing the shared
# directory entries and granting read/write permissions
[root@localhost ~]# vim /etc/exports
/usr/share *(ro,sync)
/opt/accp 192.168.200.0/24(rw,sync)
/opt/kgc 192.168.200.0/24(rw,sync)

[root@localhost ~]# cd /opt/
[root@localhost opt]# mkdir kgc accp
[root@localhost opt]# chmod 777 kgc/ accp/    # open up the directory permissions
[root@localhost opt]# exportfs -rv            # publish the shared directories
exporting 192.168.200.0/24:/opt/kgc
exporting 192.168.200.0/24:/opt/accp
exporting *:/usr/share

Step 3: configure the Web1 node server

# Modify the ens33 network card configuration
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.110    # append IP address, subnet mask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.200.1

# Restart the network service
[root@localhost ~]# systemctl restart network
# Disable the firewall and security features
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6    0    0 :::80    :::*    LISTEN    7315/httpd
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.754 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=0.372 ms
64 bytes from 192.168.200.130: icmp_seq=3 ttl=64 time=0.372 ms
[root@localhost ~]# showmount -e 192.168.200.130
Export list for 192.168.200.130:
/usr/share *
/opt/kgc 192.168.200.0/24
/opt/accp 192.168.200.0/24
# Mount the website directory
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/kgc /var/www/html/
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is kgc web" > index.html
[root@localhost html]# ls
index.html

Step 4: confirm the site file exists on the storage server

[root@localhost ~]# cd /opt/
[root@localhost opt]# ls
kgc accp rh
[root@localhost opt]# cd kgc/
[root@localhost kgc]# cat index.html
this is kgc web

Step 5: verify the web page served by the Web1 node server
Step 6: configure the Web2 node server

# Modify the ens33 network card configuration
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.120    # append IP address, subnet mask and gateway
NETMASK=255.255.255.0
GATEWAY=192.168.200.1

[root@localhost ~]# systemctl restart network
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6    0    0 :::80    :::*    LISTEN    7315/httpd
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.853 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=0.853 ms
64 bytes from 192.168.200.130: icmp_seq=3 ttl=64 time=0.624 ms
[root@localhost ~]# showmount -e 192.168.200.130
Export list for 192.168.200.130:
/usr/share *
/opt/kgc 192.168.200.0/24
/opt/accp 192.168.200.0/24
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/accp /var/www/html/
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is accp web" > index.html
[root@localhost html]# cat index.html
this is accp web

Step 7: confirm the site file on the storage server

[root@localhost ~]# ls /opt/
kgc accp rh
[root@localhost ~]# cd /opt/accp/
[root@localhost accp]# cat index.html
this is accp web

Step 8: verify the web page served by the Web2 node server
Step 9: configure the scheduler server

# Modify the ens33 network card configuration
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"        # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.1      # append IP address and subnet mask
NETMASK=255.255.255.0

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ls
ifcfg-ens33 ifdown-ppp ifup-ib ifup-Team
ifcfg-lo ifdown-routes ifup-ippp ifup-TeamPort
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens36
# Modify the ens36 network card
[root@localhost network-scripts]# vim ifcfg-ens36
BOOTPROTO="static"        # change dhcp to static
NAME="ens36"              # rename to ens36; delete the UUID line
DEVICE="ens36"            # rename to ens36
ONBOOT="yes"
IPADDR=12.0.0.1           # append IP address and subnet mask
NETMASK=255.255.255.0

[root@localhost network-scripts]# systemctl restart network
# Append this entry to enable route forwarding
[root@localhost network-scripts]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
# Load the forwarding setting
[root@localhost network-scripts]# sysctl -p
net.ipv4.ip_forward = 1
[root@localhost network-scripts]# iptables -t nat -F
[root@localhost network-scripts]# iptables -F
# Configure the SNAT forwarding rule
[root@localhost network-scripts]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1

Step 10: load the LVS kernel module

[root@localhost network-scripts]# modprobe ip_vs
[root@localhost network-scripts]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn

Step 11: save the configuration and start the service

# Save the settings
[root@localhost network-scripts]# ipvsadm-save > /etc/sysconfig/ipvsadm
[root@localhost network-scripts]# systemctl start ipvsadm.service
# Configure the load distribution policy
[root@localhost network-scripts]# cd /opt/
[root@localhost opt]# vim nat.sh
#!/bin/bash
# Clear all records in the kernel virtual server table
ipvsadm -C
# Add a new virtual server with round robin scheduling
ipvsadm -A -t 12.0.0.1:80 -s rr
# Add the real server nodes in NAT mode (-m)
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.120:80 -m

# Apply the load distribution policy and check it
[root@localhost opt]# sh nat.sh
[root@localhost opt]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  localhost.localdomain:http rr
  -> 192.168.200.110:http Masq 1 0 0
  -> 192.168.200.120:http Masq 1 0 0

Final step: visit the web page from the Windows 7 client
1. The first visit shows the web page served by the Web1 server
2. After refreshing, the page served by the Web2 server appears
That concludes this summary of LVS load balancing clusters. If there is anything else you need to know, you can look it up in the industry information or ask our professional technical engineers, who have more than ten years of industry experience.