2025-02-23 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
LVS load balancing cluster
Understand the principles of load balancing clusters
Master the deployment of LVS-NAT
Enterprise Cluster Application Overview. The meaning of clustering:
1. A cluster (also called a server group) is a group of computers working together
2. It is composed of multiple hosts, but presents itself externally as a single whole
In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up. There are two options:
1. Use expensive minicomputers and mainframes
2. Build a service cluster out of ordinary servers
Enterprise cluster classification: clusters can be divided into three types according to their target:
1. Load balancing clusters (round robin, weighted least connections)
2. High availability clusters (access speed, reliability)
3. High performance computing clusters (concurrent processing of tasks)
Load balancing cluster (Load Balance Cluster):
1. The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall load-bearing (LB) performance.
2. Load distribution in LB depends on the scheduling algorithm of the master node.
High availability cluster (High Availability Cluster):
1. The goal is to improve the reliability of the application system and reduce downtime as much as possible, ensuring service continuity and achieving the fault-tolerance effect of high availability (HA).
2. HA works in duplex mode or master-slave mode.
High performance computing cluster (High Performance Computer Cluster):
1. The goal is to increase the application system's CPU speed and expand its hardware resources and analysis capability, obtaining high performance computing (HPC) power comparable to that of mainframes and supercomputers.
2. The high performance of an HPC cluster depends on "distributed computing" and "parallel computing": through special hardware and software, the CPU, memory, and other resources of multiple servers are integrated to achieve computing power that otherwise only mainframes and supercomputers have.
Analysis of load balancing cluster working modes. The load balancing cluster is currently the most commonly used cluster type in enterprises. Its load scheduling technology has three working modes:
1. Network address translation (NAT)
2. IP tunnel (TUN)
3. Direct routing (DR)
NAT mode, Network Address Translation:
1. Referred to as NAT mode; similar to a firewall's private network structure, the load scheduler acts as the gateway for all server nodes, that is, as both the clients' access entrance and each node's exit for responding to clients.
2. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.
TUN mode, IP Tunnel:
1. Referred to as TUN mode; it uses an open network structure. The load scheduler serves only as the clients' access entrance; each node responds to clients directly through its own Internet connection rather than back through the load scheduler.
2. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through a dedicated IP tunnel.
DR mode, Direct Routing:
1. Referred to as DR mode; it uses a semi-open network structure similar to TUN mode, but the nodes are not scattered across different locations: they sit on the same physical network as the scheduler.
2. The load scheduler connects to each node server over the local network, so no dedicated IP tunnel is needed.
Load balancing cluster architecture
Structure of the load balancer:
1. Tier 1: load scheduler (Load Balancer or Director)
2. Tier 2: server pool (Server Pool)
3. Tier 3: shared storage (Share Storage)
About LVS, the Linux Virtual Server:
1. A load balancing solution for the Linux kernel
2. Created in May 1998 by Dr. Wensong Zhang of China
3. Official website: http://www.linuxvirtualserver.org/
Load scheduling algorithms of LVS:
1. Round Robin:
① Distributes received access requests sequentially to each node (real server) in the cluster
② Treats every server equally, regardless of its actual number of connections or system load
2. Weighted Round Robin:
① Distributes access requests according to the processing capacity of the real servers; the scheduler can automatically query each node's load and adjust its weight dynamically
② Ensures that servers with greater processing capacity carry more of the access traffic
3. Least Connections:
① Distributes according to the number of connections each real server has established, sending new access requests to the node with the fewest connections first
4. Weighted Least Connections:
① When server nodes differ greatly in performance, the weights of the real servers can be adjusted automatically
② Nodes with higher weights carry a larger share of the active connection load
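As a rough sketch of how plain round robin hands out requests (POSIX shell, illustrative only; real LVS schedules inside the kernel, and the node addresses here are just this article's demo IPs):

```shell
# pick_node N: which real server the Nth request (counting from 0) goes to
# under plain round robin over a fixed node list (example addresses).
pick_node() {
    n=$1
    set -- 192.168.200.110 192.168.200.120   # the demo's two web nodes
    idx=$(( n % $# + 1 ))                    # cycle through the list in order
    eval echo "\${$idx}"
}

# Requests are handed out in turn, regardless of each node's actual load:
pick_node 0   # 192.168.200.110
pick_node 1   # 192.168.200.120
pick_node 2   # 192.168.200.110
```

Weighted round robin would simply repeat a node in the rotation in proportion to its weight; the least-connections variants instead track live connection counts, which this stateless sketch does not model.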
Use the ipvsadm tool to create and manage LVS clusters
NFS shared storage service (Network File System):
1. Depends on RPC (remote procedure call)
2. Requires the nfs-utils and rpcbind packages
3. System services: nfs, rpcbind
4. Shared configuration file: /etc/exports
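Each line of /etc/exports has the form "directory client(options)". A minimal sketch (the paths and subnet below are examples matching the demo later in this article):

```
# /etc/exports -- one share per line: directory  client(options)
/usr/share  *(ro,sync)                  # read-only for any client, synchronous writes
/opt/accp   192.168.200.0/24(rw,sync)   # read-write for the 192.168.200.0/24 LAN
```

After editing the file, publish the shares with `exportfs -rv`.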
Accessing NFS shared resources from the client:
1. Install the rpcbind package and start the rpcbind service
2. Manually mount the NFS shared directory
3. Configure automatic mounting in /etc/fstab
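For step 3, an /etc/fstab entry makes the mount survive reboots. A sketch using this article's demo addresses (illustrative; adjust server, export path, and mount point to your environment):

```
# /etc/fstab -- auto-mount the NFS share at boot
# _netdev delays the mount until the network is up
192.168.200.130:/opt/accp  /var/www/html  nfs  defaults,_netdev  0 0
```

Run `mount -a` afterwards to test the entry without rebooting.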
Demo: LVS load balancing cluster environment preparation:
CentOS 7-1: scheduler/gateway (needs two network cards), outside: 12.0.0.1, inside: 192.168.200.1
CentOS 7-2: web server (Apache), 192.168.200.110
CentOS 7-3: web server (Apache), 192.168.200.120
CentOS 7-4: shared storage, 192.168.200.130
Win7-1: client, 12.0.0.12
Pre-operation for yum online installation:
1. On the shared storage server CentOS 7-4:
[root@localhost ~]# rpm -q nfs-utils
nfs-utils-1.3.0-0.48.el7.x86_64
[root@localhost ~]# rpm -q rpcbind
rpcbind-0.2.0-42.el7.x86_64
2. On the CentOS 7-2 and 7-3 node servers:
[root@localhost ~]# yum install httpd -y
3. On the scheduler/gateway CentOS 7-1 (first add a network adapter so it has two NICs):
[root@localhost ~]# yum install ipvsadm -y

On the shared storage server CentOS 7-4 (change the network card to host-only mode):
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"          # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.130      # insert the IP and subnet under the last line, then :wq to save and exit
NETMASK=255.255.255.0
[root@localhost ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start nfs.service
[root@localhost ~]# systemctl status nfs.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: active (exited) since Tue 2019-11-26 17:42:05 CST; 11s ago
...lines omitted; the status is Active, i.e. normal
[root@localhost ~]# systemctl start rpcbind.service
[root@localhost ~]# systemctl status rpcbind.service
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since Tue 2019-11-26 17:40:23 CST; 4min 26s ago
...lines omitted; the status is Active, i.e. normal
[root@localhost ~]# vim /etc/exports
/usr/share *(ro,sync)
/opt/accp 192.168.200.0/24(rw,sync)
/opt/benet 192.168.200.0/24(rw,sync)
# enter :wq to save and exit
[root@localhost ~]# cd /opt/
[root@localhost opt]# mkdir benet accp
[root@localhost opt]# ls -l
total 0
drwxr-xr-x. 2 root root 6 Nov 26 17:50 accp
drwxr-xr-x. 2 root root 6 Nov 26 17:50 benet
drwxr-xr-x. 2 root root 6 Mar 26 2015 rh
[root@localhost opt]# chmod 777 accp/ benet/    # open up the permissions
[root@localhost opt]# ls -l
total 0
drwxrwxrwx. 2 root root 6 Nov 26 17:50 accp
drwxrwxrwx. 2 root root 6 Nov 26 17:50 benet
drwxr-xr-x. 2 root root 6 Mar 26 2015 rh
[root@localhost opt]# exportfs -rv    # publish the shares
exporting 192.168.200.0/24:/opt/benet
exporting 192.168.200.0/24:/opt/accp
exporting *:/usr/share

Operation on the node server (CentOS 7-2):
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"          # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.110      # insert the IP and subnet under the last line, then :wq to save and exit
NETMASK=255.255.255.0
[root@localhost ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6  0  0 :::80  :::*  LISTEN  7315/httpd
# set the network adapter to host-only mode
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.754 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=0.368 ms
64 bytes from 192.168.200.130: icmp_seq=3 ttl=64 time=0.398 ms
[root@localhost ~]# showmount -e 192.168.200.130
Export list for 192.168.200.130:
/usr/share *
/opt/benet 192.168.200.0/24
/opt/accp  192.168.200.0/24
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/accp /var/www/html/
[root@localhost ~]# df -h
Filesystem                 Size  Used  Avail Use% Mounted on
/dev/sda2                  20G   4.3G  16G   22%  /
devtmpfs                   898M  0     898M  0%   /dev
tmpfs                      912M  0     912M  0%   /dev/shm
tmpfs                      912M  9.0M  903M  1%   /run
tmpfs                      912M  0     912M  0%   /sys/fs/cgroup
/dev/sda1                  6.0G  174M  5.9G  3%   /boot
/dev/sda5                  10G   54M   10G   1%   /home
tmpfs                      183M  4.0K  183M  1%   /run/user/42
tmpfs                      183M  20K   183M  1%   /run/user/0
/dev/sr0                   4.3G  4.3G  0     100% /run/media/root/CentOS 7 x86_64
192.168.200.130:/opt/accp  20G   3.4G  17G   17%  /var/www/html
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is accp web" > index.html
[root@localhost html]# ls
index.html

Go back to the storage server CentOS 7-4 to confirm the file is there:
[root@localhost ~]# cd /opt/
[root@localhost opt]# ls
accp benet rh
[root@localhost opt]# cd accp/
[root@localhost accp]# ls
index.html
[root@localhost accp]# cat index.html
this is accp web
# the newly created index.html file is present
To verify, open Firefox on the CentOS 7-2 node server and browse to 127.0.0.1 to check whether the page content we wrote is displayed:
Operation on the node server (CentOS 7-3):
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"          # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.120      # insert the IP and subnet under the last line, then :wq to save and exit
NETMASK=255.255.255.0
[root@localhost ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start httpd.service
[root@localhost ~]# netstat -ntap | grep 80
tcp6  0  0 :::80  :::*  LISTEN  7315/httpd
# set the network adapter to host-only mode
[root@localhost ~]# ping 192.168.200.130
PING 192.168.200.130 (192.168.200.130) 56(84) bytes of data.
64 bytes from 192.168.200.130: icmp_seq=1 ttl=64 time=0.532 ms
64 bytes from 192.168.200.130: icmp_seq=2 ttl=64 time=1.01 ms
64 bytes from 192.168.200.130: icmp_seq=3 ttl=64 time=0.940 ms
[root@localhost ~]# showmount -e 192.168.200.130
Export list for 192.168.200.130:
/usr/share *
/opt/benet 192.168.200.0/24
/opt/accp  192.168.200.0/24
[root@localhost ~]# mount.nfs 192.168.200.130:/opt/benet /var/www/html/
[root@localhost ~]# df -h
Filesystem                  Size  Used  Avail Use% Mounted on
/dev/sda2                   20G   3.4G  17G   17%  /
devtmpfs                    898M  0     898M  0%   /dev
tmpfs                       912M  0     912M  0%   /dev/shm
tmpfs                       912M  9.0M  903M  1%   /run
tmpfs                       912M  0     912M  0%   /sys/fs/cgroup
/dev/sda1                   6.0G  174M  5.9G  3%   /boot
/dev/sda5                   10G   54M   10G   1%   /home
tmpfs                       183M  4.0K  183M  1%   /run/user/42
tmpfs                       183M  20K   183M  1%   /run/user/0
/dev/sr0                    4.3G  4.3G  0     100% /run/media/root/CentOS 7 x86_64
192.168.200.130:/opt/benet  20G   3.4G  17G   17%  /var/www/html
[root@localhost ~]# cd /var/www/html/
[root@localhost html]# echo "this is benet web" > index.html
[root@localhost html]# ls
index.html

Go back to the storage server CentOS 7-4 to confirm the file is there:
[root@localhost ~]# cd /opt/
[root@localhost opt]# ls
accp benet rh
[root@localhost opt]# cd benet/
[root@localhost benet]# ls
index.html
[root@localhost benet]# cat index.html
this is benet web
# the newly created index.html file is present
To verify, open Firefox on the CentOS 7-3 node server and browse to 127.0.0.1 to check whether the page content we wrote is displayed:
Operation on the scheduler/gateway server CentOS 7-1:
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ls
ifcfg-ens33  ifdown-ppp     ifup-ib    ifup-Team
ifcfg-lo     ifdown-routes  ifup-ippp  ifup-TeamPort
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@localhost network-scripts]# vim ifcfg-ens36
BOOTPROTO="static"          # change dhcp to static
NAME="ens36"                # change the name to ens36; delete the UUID line
DEVICE="ens36"              # change the device to ens36
ONBOOT="yes"
IPADDR=12.0.0.1             # insert the IP and subnet under the last line
NETMASK=255.255.255.0
# after the changes, enter :wq to save and exit
[root@localhost network-scripts]# vim ifcfg-ens33
BOOTPROTO="static"          # change dhcp to static
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.200.1        # insert the IP and subnet under the last line
NETMASK=255.255.255.0
# after the changes, enter :wq to save and exit
[root@localhost network-scripts]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@localhost network-scripts]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1       # press o on the last line and insert this, then :wq to save and exit
[root@localhost network-scripts]# sysctl -p
net.ipv4.ip_forward = 1
[root@localhost network-scripts]# iptables -t nat -F
[root@localhost network-scripts]# iptables -F
[root@localhost network-scripts]# iptables -t nat -A POSTROUTING -o ens36 -s 192.168.200.0/24 -j SNAT --to-source 12.0.0.1

On win7-1, verify forwarding: first set its network card to host-only mode, configure a static address on the NIC, and turn off its firewall:
Now use an internal node server to ping 12.0.0.12 and test reachability:
[root@localhost html]# ping 12.0.0.12
PING 12.0.0.12 (12.0.0.12) 56(84) bytes of data.
64 bytes from 12.0.0.12: icmp_seq=1 ttl=127 time=1.14 ms
64 bytes from 12.0.0.12: icmp_seq=2 ttl=127 time=1.78 ms
64 bytes from 12.0.0.12: icmp_seq=3 ttl=127 time=1.02 ms
# the ping succeeds, so forwarding works

Load the LVS kernel module:
[root@localhost network-scripts]# modprobe ip_vs
[root@localhost network-scripts]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn

Start the service (note: on CentOS 7 you must save the rules before starting the service, otherwise it reports an error!):
[root@localhost network-scripts]# ipvsadm --save > /etc/sysconfig/ipvsadm
[root@localhost network-scripts]# systemctl start ipvsadm.service

Write the configuration rules:
[root@localhost network-scripts]# cd /opt/
[root@localhost opt]# vim nat.sh
ipvsadm -C                                            # clear all records in the kernel virtual server table
ipvsadm -A -t 12.0.0.1:80 -s rr                       # add a new virtual server with round robin scheduling
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.110:80 -m    # add a real server, -m = NAT (masquerading) mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.120:80 -m
ipvsadm                                               # list the rules
# enter :wq to save and exit
[root@localhost opt]# source nat.sh
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  localhost.localdomain:http rr
  -> 192.168.200.110:http         Masq    1      0          0
  -> 192.168.200.120:http         Masq    1      0          0

Verification: access 12.0.0.1 from the win7-1 client and see whether it succeeds:
The site can be accessed, which shows that the LVS load balancing cluster is successfully providing service. The experiment succeeds!