LVS Cluster Application Foundation and Building an NFS Shared Storage Service

This article covers the basics of LVS clusters and how to build an NFS shared storage service, in the hope that it will help you in practice. Load balancing covers a lot of ground, the theory itself is not complicated, and plenty of material is available online; here we draw on accumulated industry experience to walk through it.

1. LVS Cluster Application Foundation

Depending on the enterprise environment, clusters provide different functions and may differ in technical details. On the whole, however, it helps to understand the common characteristics of clusters first, so that building and maintaining a cluster can be done with a clear plan rather than blindly.

1. Types of clusters

Whatever its kind, a cluster includes at least two node servers, yet it appears to the outside world as a single whole that provides only one access entry (a domain name or IP address), as if it were one large computer. According to its target, a cluster falls into one of the following three types.

Load balancing cluster (Load Balance Cluster): aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall load handling. For example, "DNS round robin", "application-layer switching", and "reverse proxy" can all serve as load balancing clusters. Load distribution in an LB cluster depends on the distribution algorithm of the master node, which spreads client access requests across multiple server nodes, relieving the load pressure on the system as a whole.

High availability cluster (High Availability Cluster): aims to improve the reliability of the application system, reduce downtime as much as possible, ensure service continuity, and achieve the fault-tolerance effect of high availability (HA). For example, "failover", "dual-machine hot standby", and "multi-machine hot standby" all belong to high availability clustering technology. HA works in duplex or master-slave mode: in duplex mode all nodes are online at the same time; in master-slave mode only the master node is online, but when a failure occurs a slave node automatically takes over as the master, similar in principle to HSRP on Cisco routers.

High performance computing cluster (High Performance Computer Cluster): aims to improve the CPU computing speed of the application system, expand hardware resources and analysis capability, and obtain high performance computing (HPC) power comparable to that of mainframes and supercomputers. For example, "cloud computing" and "grid computing" can also be regarded as kinds of HPC. The high performance of an HPC cluster relies on "distributed computing" and "parallel computing": dedicated hardware and software integrate the CPU, memory, and other resources of multiple servers to achieve computing power that otherwise only mainframes and supercomputers possess.

Different types of clusters can be combined according to actual needs, for example a highly available load balancing cluster.

2. Hierarchical structure of load balancing

The first layer, the load scheduler: the single entry point of the entire cluster system, which externally uses the VIP (virtual IP) address common to all servers, also known as the cluster IP address. Usually a master and a standby scheduler are configured for hot backup; when the master scheduler fails, traffic is smoothly switched to the standby scheduler to ensure high availability.

The second layer, the server pool: the application services provided by the cluster (such as HTTP and FTP) are carried by the server pool, in which each node has an independent RIP (real IP) address and handles only the client requests distributed to it by the scheduler. When a node temporarily fails, the fault-tolerance mechanism of the load scheduler isolates it, and it is returned to the server pool after the fault is resolved.

The third layer, shared storage: provides stable and consistent file access services for all nodes in the server pool, ensuring the consistency of the entire cluster. In a Linux/UNIX environment, shared storage can use NAS devices or dedicated servers that provide NFS (Network File System) shares.

3. Working modes of load balancing

The cluster's load scheduling technology can distribute traffic based on IP, port, content, and so on, with IP-based load scheduling being the most efficient. Within IP-based load balancing, address translation, IP tunneling, and direct routing are the three common working modes, described below:

Address translation: NAT mode for short, similar to a firewall's private network structure. The load scheduler acts as the gateway for all server nodes, serving both as the clients' access entrance and as each node's exit for responses to clients. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes, but the load scheduler bears more pressure (a short configuration sketch for this mode follows the list).

IP tunnel: TUN mode for short, uses an open network structure in which the load scheduler serves only as the clients' access entrance. Each node responds to clients directly over its own Internet connection rather than through the load scheduler. The server nodes may be scattered across different locations on the Internet, each with an independent public IP address, and they communicate with the load scheduler through dedicated IP tunnels.

Direct routing: DR mode for short, adopts a semi-open network structure similar to that of TUN mode, except that the nodes are not scattered across the Internet but sit on the same physical network as the scheduler. The load scheduler connects to each node server over the local network, so no dedicated IP tunnel is needed.

Of the three working modes, NAT mode needs only one public IP address, making it the easiest load balancing mode to use, with good security; many hardware load balancer appliances use this mode. By comparison, DR mode and TUN mode offer greater load capacity and a wider range of applications, but node security is slightly weaker.
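
Because in NAT mode the scheduler is the gateway for all nodes and forwards every request and response, IP forwarding is normally enabled on the scheduler host. Below is a minimal sketch on CentOS 7; the sysctl.conf edit is shown as an assumption about the host's configuration layout, not part of the original text:

[root@centos01 ~]# vim /etc/sysctl.conf        # append the line below so the scheduler forwards packets between networks
net.ipv4.ip_forward = 1
[root@centos01 ~]# sysctl -p                   # reload kernel parameters so the change takes effect immediately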

2. LVS virtual server

Linux Virtual Server (LVS) is a load balancing project developed for the Linux kernel; its official website, http://www.linuxvirtualserver.org/, hosts the relevant technical documentation. LVS is essentially a virtualized application based on IP addresses, providing an efficient solution for load balancing based on IP addresses and content request distribution. LVS is now part of the Linux kernel, compiled as the ip_vs module by default, and can be invoked automatically when needed. On CentOS 7 systems, you can manually load the ip_vs module and view its version information as follows:

[root@centos01 ~]# modprobe ip_vs
[root@centos01 ~]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
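
As a quick sanity check (a hedged sketch; the exact module list depends on the kernel build), you can confirm that the module is loaded and see which ip_vs scheduler modules ship with the current kernel:

[root@centos01 ~]# lsmod | grep ip_vs                                            # confirm ip_vs is loaded
[root@centos01 ~]# ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs/    # list the available ip_vs scheduler modules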

1. Load scheduling algorithms of LVS

According to the needs of different network services and configurations, the LVS scheduler provides a variety of load scheduling algorithms, of which the four most commonly used are round robin, weighted round robin, least connections, and weighted least connections; a short command sketch for switching between them follows the list.

Round robin (rr): received access requests are distributed sequentially to the nodes in the cluster (real servers), treating every server equally regardless of its actual number of connections or system load.

Weighted round robin (wrr): received access requests are distributed in turn according to the processing capacity of each real server. The scheduler can automatically query the load of each node and adjust its weight dynamically, ensuring that servers with stronger processing capacity carry more of the access traffic.

Least connections (lc): based on the number of connections each real server has established, received access requests are directed preferentially to the node with the fewest connections. If all server nodes have similar performance, this method balances the load well.

Weighted least connections (wlc): when server nodes differ greatly in performance, the weights of the real servers can be adjusted automatically, and nodes with higher weights carry a larger share of the active connection load.
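
As a minimal sketch of switching between these algorithms (it assumes the virtual server 172.16.16.172:80 that is created in the next subsection), the -E option of ipvsadm edits an existing virtual server:

[root@centos01 ~]# ipvsadm -E -t 172.16.16.172:80 -s wlc    # change the scheduling algorithm from rr to wlc
[root@centos01 ~]# ipvsadm -ln                              # the Scheduler column should now show wlc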

2. Use the ipvsadm management tool

ipvsadm is the LVS cluster management tool used on the load scheduler. It calls the ip_vs module to add and remove server nodes and to view the running state of the cluster.

[root@centos01 ~]# yum -y install ipvsadm
[root@centos01 ~]# ipvsadm -v
ipvsadm v1.27 2008-5-15 (compiled with popt and IPVS v1.2.1)

1) Create a virtual server

If the cluster's VIP address is 172.16.16.172, load distribution is provided for TCP port 80, and the scheduling algorithm is round robin, the corresponding ipvsadm command is as follows. For the load scheduler, the VIP must be an IP address actually enabled on the local machine:

[root@centos01 ~]# ipvsadm -A -t 172.16.16.172:80 -s rr

In the above operation, the -A option adds a virtual server, -t specifies the VIP address and TCP port, and -s specifies the load scheduling algorithm: round robin (rr), weighted round robin (wrr), least connections (lc), or weighted least connections (wlc).

2) Add server nodes

For the virtual server 172.16.16.172, add four server nodes with the IP addresses 192.168.7.21-192.168.7.24; the corresponding ipvsadm commands are shown below. If you want to keep client connections persistent, also add the "-p 60" option, where 60 is the hold time in seconds:

[root@centos01 ~]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -m -w 1
[root@centos01 ~]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.22:80 -m -w 1
[root@centos01 ~]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.23:80 -m -w 1
[root@centos01 ~]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.24:80 -m -w 1

In the above operation, the -a option adds a real server, -t specifies the VIP address and TCP port, -r specifies the RIP (real IP) address and TCP port, -m selects the NAT cluster mode (-g selects DR mode and -i selects TUN mode), and -w sets the node's weight (a weight of 0 pauses the node).
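
Building on the weight note above, here is a hedged sketch of pausing and resuming a node by editing its weight with the -e (edit real server) option; the addresses reuse the example cluster:

[root@centos01 ~]# ipvsadm -e -t 172.16.16.172:80 -r 192.168.7.24:80 -m -w 0    # weight 0 pauses the node, so no new requests are sent to it
[root@centos01 ~]# ipvsadm -e -t 172.16.16.172:80 -r 192.168.7.24:80 -m -w 1    # restore the weight to bring the node back into rotation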

3) View the status of cluster nodes

With the -l option, you can list the LVS virtual servers; you can restrict the view to a single VIP address (all are shown by default). Adding the -n option displays addresses, ports, and other information in numeric form:

[root@centos01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.16.172:80 rr
  -> 192.168.7.21:80              Masq    1      0          0
  -> 192.168.7.22:80              Masq    1      0          0
  -> 192.168.7.23:80              Masq    1      0          0
  -> 192.168.7.24:80              Masq    1      0          0

Masq (short for Masquerade, i.e. address masquerading) in the Forward column of the above output indicates that the cluster mode is NAT; if it shows Route, the cluster mode is DR.

4) Delete the server node

Use the -d option when you need to remove a node from the server pool. The delete operation must specify the target object, including both the node address and the virtual IP address. For example, the following command deletes node 192.168.7.24 from the LVS cluster 172.16.16.172:

[root@centos01 ~]# ipvsadm -d -r 192.168.7.24:80 -t 172.16.16.172:80

When you need to delete the entire virtual server, use the -D option and specify the virtual IP address rather than a node. For example, executing "ipvsadm -D -t 172.16.16.172:80" deletes this virtual server.

5) Save the load distribution policy

Use the export/import tools ipvsadm-save and ipvsadm-restore to save and restore LVS policies. You can also quickly clear the load distribution policy and rebuild it.
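
To complement the save operation shown below, here is a hedged sketch of the reverse direction: clearing the current table and reloading the rules from the same file (the path matches the save example):

[root@centos01 ~]# ipvsadm -C                                   # clear all virtual server rules
[root@centos01 ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm     # reload the policy saved by ipvsadm-save
[root@centos01 ~]# ipvsadm -ln                                  # verify that the entries are back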

[root@centos01 ~]# ipvsadm-save > /etc/sysconfig/ipvsadm
[root@centos01 ~]# cat /etc/sysconfig/ipvsadm
-A -t 172.16.16.172:http -s rr
-a -t 172.16.16.172:http -r 192.168.7.21:http -m -w 1
-a -t 172.16.16.172:http -r 192.168.7.22:http -m -w 1
-a -t 172.16.16.172:http -r 192.168.7.23:http -m -w 1
-a -t 172.16.16.172:http -r 192.168.7.24:http -m -w 1
[root@centos01 ~]# systemctl stop ipvsadm
[root@centos01 ~]# systemctl start ipvsadm

3. Build the NFS shared storage service

1. Use NFS to publish shared resources

1) Install the nfs-utils and rpcbind packages

[root@centos01 ~]# yum -y install nfs-utils rpcbind
[root@centos01 ~]# systemctl enable nfs
[root@centos01 ~]# systemctl enable rpcbind

2) Set the shared directory

The configuration file for NFS is /etc/exports, which is empty by default (nothing is shared). When you define a shared resource in the exports file, the record format is "directory location  client address(permission options)". For example, to share the folder /opt/wwwroot with the 192.168.100.0/24 network segment and allow read and write access, do the following:

[root@centos01 ~]# mkdir -p /opt/wwwroot
[root@centos01 ~]# vim /etc/exports
/opt/wwwroot 192.168.100.0/24(rw,sync,no_root_squash)

In the above configuration, "192.168.100.0/24" is the client address allowed to access the share; it can be a hostname, an IP address, or a network segment address, and the * and ? wildcards are allowed. Among the permission options, rw allows read-write access (ro is read-only), sync means synchronous writes, and no_root_squash grants a client local root privileges when it accesses the share as root (the default is root_squash, which maps root to the nfsnobody user).
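
If you adjust these options later, the share list can be refreshed without restarting the NFS service; below is a minimal sketch using the exportfs tool that comes with nfs-utils:

[root@centos01 ~]# exportfs -rv    # re-export everything defined in /etc/exports and report the changes
[root@centos01 ~]# exportfs -v     # list the currently active exports together with their options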

When you need to share the same directory with different clients and give them different permissions, simply list multiple "client(permission options)" entries separated by spaces. For example, the following operation shares the /var/ftp/pub directory with two clients, granting read-only and read-write permission respectively:

[root@centos01 ~]# vim /etc/exports
/var/ftp/pub 192.168.4.11(ro) 192.168.4.110(rw)

3) Start the NFS service program

[root@centos01 ~]# systemctl start rpcbind
[root@centos01 ~]# systemctl start nfs
[root@centos01 ~]# netstat -anptu | grep rpcbind
udp        0      0 0.0.0.0:965      0.0.0.0:*               2064/rpcbind
udp        0      0 0.0.0.0:111      0.0.0.0:*               2064/rpcbind
udp6       0      0 :::965           :::*                    2064/rpcbind
udp6       0      0 :::111           :::*                    2064/rpcbind

4) View the NFS shared directories published by this machine

[root@centos01 ~]# showmount -e
Export list for centos01:
/opt/wwwroot 192.168.100.0/24
/var/ftp/pub 192.168.4.110,192.168.4.11

2. Access NFS shared resources in the client

Since the goal of NFS is to provide a network file system, NFS shares are accessed by mounting them with the mount command, with the file system type nfs. They can be mounted manually or mounted automatically at boot by adding an entry to the /etc/fstab configuration file. For the sake of network stability in the cluster, it is best to connect the NFS server and its clients over a dedicated network.

1) Install the rpcbind package and start the rpcbind service

To access NFS shared resources properly, the client must also have the rpcbind package installed and the rpcbind system service running. In addition, to use the showmount query tool, it is recommended to install the nfs-utils package as well.

[root@centos02 ~]# yum -y install nfs-utils rpcbind
[root@centos02 ~]# systemctl start rpcbind
[root@centos02 ~]# systemctl enable rpcbind

If the nfs-utils package is installed, the client can also use showmount to see which directories the NFS server shares; the query format is "showmount -e <server address>":

[root@centos02 ~]# showmount -e 192.168.100.10
Export list for 192.168.100.10:
/opt/wwwroot 192.168.100.0/24
/var/ftp/pub 192.168.4.110,192.168.4.11

2) Manually mount the NFS shared directory

As the root user, mount the /opt/wwwroot directory shared by the NFS server onto the local directory /var/www/html. Unlike mounting a local file system, the device location must include the server address:

[root@centos02 ~]# mount 192.168.100.10:/opt/wwwroot /var/www/html
[root@centos02 ~]# df -hT /var/www/html
Filesystem                   Type  Size  Used  Avail  Use%  Mounted on
192.168.100.10:/opt/wwwroot  nfs4  76G   3.7G  73G    5%    /var/www/html

3) fstab automatic mount settings

Modify the /etc/fstab configuration file to add a mount entry for the NFS shared directory. Note that the file system type must be set to nfs and the mount parameter _netdev (the device requires the network) should be added. Adding the soft and intr parameters gives a soft mount, which allows the mount to be abandoned if the network is interrupted. In this way, the client automatically mounts the NFS shared resource after every boot.

[root@centos02 ~]# vim /etc/fstab
...
192.168.100.10:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0
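
As a hedged variant of the entry above (same server and mount point assumed), the soft-mount parameters mentioned earlier go into the options column, and mount -a can then be used to test the entry without rebooting:

[root@centos02 ~]# vim /etc/fstab
...
192.168.100.10:/opt/wwwroot  /var/www/html  nfs  soft,intr,_netdev  0 0
[root@centos02 ~]# mount -a    # mount any not-yet-mounted fstab entries, testing the new line before the next boot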

That concludes this overview of LVS cluster application fundamentals and building an NFS shared storage service. If there is anything else you would like to know, you can browse the related industry information or ask our professional technical engineers, who have more than ten years of experience in the industry.
