
Detailed Explanation of LVS Load Balancing Clusters


This blog post contains the following:

1. Cluster type

2. Hierarchical structure of load balancing

3. The working mode of load balancing

4. Load scheduling algorithm of LVS

5. Basic commands related to LVS

6. Use ipvsadm management tools

7. Build an NFS shared storage server

8. Build an LVS load balancing cluster instance based on NAT mode

Depending on the production environment, clusters provide different functions and may differ in technical detail. The concepts related to clustering technology are as follows:

1. Cluster type

Whatever its kind, a cluster consists of at least two node servers, yet it appears externally as a single whole that provides only one access entry (a domain name or IP address), much like one large computer. Depending on the goal a cluster targets, clusters can be divided into the following three types:

Load balancing cluster (LB): aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving overall high concurrency and high load capacity. For example, "DNS round robin", "application-layer switching", and "reverse proxy" can all be used to build load balancing clusters. Load distribution in an LB cluster depends on the distribution algorithm of the master node, which spreads client access requests across multiple server nodes, thus relieving the load pressure on the whole system.

High availability cluster (HA): aims to improve the reliability of the application system, reduce downtime as much as possible, ensure service continuity, and achieve high-availability (HA) fault tolerance. For example, "failover", "dual-machine hot backup", and "multi-machine hot backup" all belong to high availability clustering technology. HA works in either duplex or master-slave mode. In duplex mode all nodes are online at the same time; in master-slave mode only the master node is online, but when a failure occurs a slave node automatically takes over as master, similar in principle to HSRP on Cisco routers.

High performance computing cluster (HPC): aims to improve the computing speed of application systems and expand hardware resources and analysis capabilities, obtaining high performance computing (HPC) power comparable to mainframes and supercomputers. For example, "cloud computing" and "grid computing" can also be regarded as kinds of HPC. The high performance of an HPC cluster depends on "distributed computing" and "parallel computing": dedicated hardware and software integrate the CPUs, memory, and other resources of many servers to achieve computing power that would otherwise require a mainframe or supercomputer.

Different cluster types can also be combined according to actual needs, for example a highly available load balancing cluster.

2. Hierarchical structure of load balancing

A typical load balancing cluster has a three-tier structure. The functions of each tier are as follows:

Tier 1: the load scheduler. This is the sole entrance to the entire cluster system; it uses the VIP (virtual IP) address shared by all servers, also known as the cluster IP. Usually a master and a backup scheduler are configured for hot backup to ensure high availability.

Tier 2: the server pool. The application services provided by the cluster (such as HTTP and FTP) are carried by the server pool, in which each node has an independent RIP (real IP) address and handles only the client requests distributed to it by the scheduler. When a node fails temporarily, the load scheduler's fault-tolerance mechanism isolates it, and it is returned to the server pool after the fault is resolved.

Tier 3: shared storage. It provides stable, consistent file access for all nodes in the server pool, ensuring the consistency of the entire cluster. In a Linux/UNIX environment, shared storage can be a NAS device or a dedicated server providing NFS (Network File System) sharing.

3. The working mode of load balancing


NAT mode: similar to a firewall's private network structure, the load scheduler acts as the gateway for all server nodes, serving both as the clients' access entrance and as each node's exit for responses to clients. The server nodes use private IP addresses and sit on the same physical network as the load scheduler. Security is better than in the other two modes, but the load scheduler bears more pressure.

TUN mode: an open network structure is adopted. The load scheduler serves only as the clients' access entrance, and each node responds to clients directly over its own Internet connection rather than through the load scheduler. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through a dedicated IP tunnel.

DR mode: a semi-open network structure is adopted, similar to TUN mode, except that the nodes are not scattered in different places; they sit on the same physical network as the scheduler. The load scheduler communicates with each node server over the local network, so no dedicated IP tunnel is needed.
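These modes also imply a couple of kernel settings that this article does not cover in detail. The following is a minimal sketch only, reusing the example VIP 200.0.0.1 from the later sections; interface names and exact values are assumptions that depend on your environment. In NAT mode the scheduler must forward IPv4 packets, while in DR mode each real server typically holds the VIP on its loopback interface and suppresses ARP replies for it:

# On the NAT-mode load scheduler: enable packet forwarding so it can act as the nodes' gateway
[root@localhost ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@localhost ~]# sysctl -p

# On each DR-mode real server (example values): bind the VIP to lo and suppress ARP for it
[root@localhost ~]# ip addr add 200.0.0.1/32 dev lo
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce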

LVS is a load balancing project developed for the Linux kernel. The official website is http://www.linuxvirtualserver.org/, where the relevant technical documentation can be consulted. LVS is now part of the Linux kernel; it is compiled as the ip_vs module by default and can be loaded automatically when needed.

4. Load scheduling algorithm of LVS

Round robin (rr): distributes received access requests sequentially to the nodes in the cluster (the real servers), treating every server equally regardless of its actual number of connections and system load.

Weighted round robin (wrr): distributes received access requests in turn according to the processing capacity of each real server. The scheduler can automatically query the load of each node and adjust its weight dynamically, ensuring that servers with stronger processing capacity bear more of the access traffic.

Least connections (lc): assigns received access requests preferentially to the node with the fewest established connections, based on each real server's connection count. If all server nodes have similar performance, this method balances the load well.

Weighted least connections (wlc): when server nodes differ greatly in performance, the weight of each real server can be adjusted automatically, and nodes with higher weights bear a larger share of the active connection load.
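The algorithm is selected with the -s option when a virtual server is created (see the examples in the following sections) and can be changed afterwards if needed. A small sketch, assuming the example virtual server 200.0.0.1:80 used later in this article:

[root@localhost ~]# ipvsadm -A -t 200.0.0.1:80 -s rr     # create the virtual server with round robin
[root@localhost ~]# ipvsadm -E -t 200.0.0.1:80 -s wlc    # later switch it to weighted least connections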

5. Basic commands related to LVS

By default, the ip_vs module is not loaded. You can load it by executing the following commands:

[root@localhost ~]# modprobe ip_vs              # load the ip_vs module
[root@localhost ~]# lsmod | grep ip_vs          # check whether the ip_vs module has been loaded
ip_vs                 141432  0
nf_conntrack          133053  8 ip_vs,nf_nat,nf_nat_ipv4,...
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@localhost ~]# modprobe -r ip_vs           # remove the ip_vs module
[root@localhost ~]# lsmod | grep ip_vs
[root@localhost ~]# modprobe ip_vs
[root@localhost ~]# cat /proc/net/ip_vs         # view ip_vs version information
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
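modprobe loads ip_vs only until the next reboot. If you want the module present after every boot, one common approach on systemd-based distributions (an assumption about your environment; adjust as needed) is to add a modules-load.d entry:

[root@localhost ~]# echo ip_vs > /etc/modules-load.d/ip_vs.conf    # have systemd load ip_vs at boot
[root@localhost ~]# lsmod | grep ip_vs                             # verify after the next reboot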

6. Use ipvsadm management tools

ipvsadm is an LVS cluster management tool used on the load scheduler. It calls the ip_vs module to add and remove server nodes and to view the operating status of the cluster.

[root@localhost ~]# yum -y install ipvsadm      # install the ipvsadm tool
[root@localhost ~]# ipvsadm -v                  # view the ipvsadm version
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)

1) Create a virtual server using the ipvsadm tool:

Suppose the cluster's VIP address is 200.0.0.1, the load distribution service is provided on TCP port 80, and the scheduling algorithm is rr; the corresponding command is shown below. On the load scheduler, the VIP must be an IP address actually configured on the local machine:

[root@localhost ~]# ipvsadm -A -t 200.0.0.1:80 -s rr

2) Add server nodes:

Add four server nodes, with the IP addresses 192.168.1.2 through 192.168.1.5, to the virtual server 200.0.0.1, using the following commands:

[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.2:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.3:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.4:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.5:80 -m -w 1
[root@localhost ~]# ipvsadm -ln                 # view node status
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr
  -> 192.168.1.2:80               Masq    1      0          0
  -> 192.168.1.3:80               Masq    1      0          0
  -> 192.168.1.4:80               Masq    1      0          0
  -> 192.168.1.5:80               Masq    1      0          0
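To see how requests are actually being distributed once traffic arrives, ipvsadm can also show per-node statistics and the current connection table. A brief sketch (the counters shown will depend on real traffic):

[root@localhost ~]# ipvsadm -Ln --stats    # packet and byte counters per virtual server and node
[root@localhost ~]# ipvsadm -Lnc           # list the current connection entries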

3) Delete a server node:

Use the -d option when you need to remove an individual node from the server pool. The delete operation must specify the target object, including the node address and the virtual IP address. For example, the following command deletes the node 192.168.1.5 from the LVS cluster 200.0.0.1:

[root@localhost ~]# ipvsadm -d -r 192.168.1.5:80 -t 200.0.0.1:80

When you need to delete the entire virtual server, use the -D option and specify the virtual IP address rather than a node. For example, executing "ipvsadm -D -t 200.0.0.1:80" deletes the whole virtual server.
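Written out in the same prompt style as the other examples, together with the related -C option (which clears every virtual server at once, as used again in the next step):

[root@localhost ~]# ipvsadm -D -t 200.0.0.1:80    # delete this virtual server and all of its nodes
[root@localhost ~]# ipvsadm -C                    # clear all virtual server rules at once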

4) Save the load distribution policy:

Use the export/import tools ipvsadm-save and ipvsadm-restore to save and restore the LVS policy (the policy must be re-imported after the server restarts).

[root@localhost ~]# hostname lvs                             # change the hostname
[root@localhost ~]# bash                                     # make the new hostname take effect immediately
[root@lvs ~]# ipvsadm-save > /etc/sysconfig/ipvsadm.bak      # save the policy
[root@lvs ~]# cat /etc/sysconfig/ipvsadm.bak                 # confirm the save result
-A -t 200.0.0.1:http -s rr
-a -t 200.0.0.1:http -r 192.168.1.2:http -m -w 1
-a -t 200.0.0.1:http -r 192.168.1.3:http -m -w 1
-a -t 200.0.0.1:http -r 192.168.1.4:http -m -w 1
[root@lvs ~]# ipvsadm -C                                     # clear the current policy
[root@lvs ~]# ipvsadm -ln                                    # confirm that the cluster policy has been cleared
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm.bak   # import the policy that was just backed up
[root@lvs ~]# ipvsadm -ln                                    # check whether the cluster policy was imported successfully
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr
  -> 192.168.1.2:80               Masq    1      0          0
  -> 192.168.1.3:80               Masq    1      0          0
  -> 192.168.1.4:80               Masq    1      0          0
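On CentOS/RHEL the ipvsadm package also ships a small systemd service that restores /etc/sysconfig/ipvsadm at boot. Assuming that package layout (check your distribution), saving the rules to that exact path lets you skip the manual ipvsadm-restore step after a reboot:

[root@lvs ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm    # save (numeric form) to the path the ipvsadm service reads
[root@lvs ~]# systemctl enable ipvsadm                    # restore the saved rules automatically at boot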

7. Build an NFS shared storage server

NFS is a network file system protocol carried over TCP/IP. Using NFS, clients can access resources on a remote server as if they were local directories. Most load balancing clusters use NFS to share data storage, and NFS is also a protocol that NAS storage devices must support.

Use NFS to publish shared resources:

1) Install the required software packages:

[root@localhost ~]# yum -y install nfs-utils rpcbind    # install the software packages
[root@localhost ~]# systemctl enable nfs                # set the NFS service to start at boot
[root@localhost ~]# systemctl enable rpcbind            # set the rpcbind service to start at boot

2) Set up the shared directory:

[root@localhost ~]# mkdir -p /opt/wwwroot               # create the directory to be shared
[root@localhost ~]# vim /etc/exports                    # edit the NFS configuration file (empty by default)
/opt/wwwroot 192.168.1.0/24(rw,sync,no_root_squash)

When you need to share the same directory with different clients and assign them different permissions, simply list multiple "client(permission options)" entries separated by spaces, as follows:

[root@localhost ~]# vim /etc/exports
/var/ftp/pub 192.168.2.1(ro,sync) 192.168.2.3(rw,sync)

3) Reload the NFS services:

[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# netstat -anpt | grep rpc
tcp        0      0 0.0.0.0:111        0.0.0.0:*      LISTEN      76307/rpcbind
tcp        0      0 0.0.0.0:43759      0.0.0.0:*      LISTEN      76336/rpc.statd
tcp        0      0 0.0.0.0:20048      0.0.0.0:*      LISTEN      76350/rpc.mountd
tcp6       0      0 :::111             :::*           LISTEN      76307/rpcbind
tcp6       0      0 :::20048           :::*           LISTEN      76350/rpc.mountd
tcp6       0      0 :::38355           :::*           LISTEN      76336/rpc.statd
[root@localhost ~]# showmount -e                        # view the NFS shared directories published by this machine
Export list for localhost.localdomain:
/opt/wwwroot 192.168.1.0/24
/var/ftp/pub 192.168.2.3,192.168.2.1
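If you only change /etc/exports later, a full service restart is not strictly necessary; exportfs (part of the nfs-utils package installed above) can re-read the file and show what is currently exported:

[root@localhost ~]# exportfs -rv    # re-export all directories listed in /etc/exports, verbosely
[root@localhost ~]# exportfs -v     # list the currently exported directories and their options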

4) Access the NFS shared resources from the client:

Because NFS provides a network file system, an NFS share is accessed by mounting it with the mount command, using the file system type nfs. It can be mounted manually or mounted automatically at boot via an entry in /etc/fstab. For the sake of network stability in the cluster, it is best to connect the NFS server and its clients over a dedicated network.

1. Install the rpcbind package and start the rpcbind service. In order to use the showmount query tool, install nfs-utils as well:

[root@localhost ~]# yum -y install nfs-utils rpcbind
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# systemctl start rpcbind

2. Query which directories are shared by the NFS server:

[root@localhost ~]# showmount -e 192.168.1.1            # specify the server address to query
Export list for 192.168.1.1:
/opt/wwwroot 192.168.1.0/24
/var/ftp/pub 192.168.2.3,192.168.2.1

3. Manually mount the NFS shared directory and configure automatic mounting at boot:

[root@localhost ~]# mount 192.168.1.1:/opt/wwwroot /var/www/html    # mount the share locally
[root@localhost ~]# df -hT /var/www/html                            # check whether the mount succeeded
Filesystem               Type  Size  Used  Avail  Use%  Mounted on
192.168.1.1:/opt/wwwroot nfs4  17G   6.2G  11G    37%   /var/www/html
[root@localhost ~]# vim /etc/fstab                                  # set automatic mounting at boot
192.168.1.1:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0

After the mount is complete, accessing the client's /var/www/html directory is equivalent to accessing the /opt/wwwroot directory on the NFS server; the network mapping is completely transparent to user programs.
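A quick way to confirm that the mapping is transparent is to create a file on one side and look for it on the other; the file name below is only a placeholder for illustration:

# On the NFS server:
[root@localhost ~]# touch /opt/wwwroot/test.html
# On the client:
[root@localhost ~]# ls /var/www/html
test.html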

For an example of configuring an LVS cluster in NAT mode, please refer to the blog article: Building an LVS Load Balancing Cluster Based on NAT Mode.
