Cloud Load Balancer is a service that distributes traffic across multiple CVM instances. By distributing traffic, load balancing expands the external service capacity of an application system, and by eliminating single points of failure it improves the system's availability. The cloud load balancer service sets up a virtual service address (VIP) that virtualizes multiple CVM resources in the same region into a high-performance, highly available application service pool, then distributes client network requests to that pool in the way the application specifies. The service also checks the health of the CVM instances in the pool and automatically isolates abnormal instances, which removes the single point of failure of an individual CVM and improves the overall service capability of the application. Today, I will introduce the concept of the LVS load balancing cluster in detail.
In all kinds of Internet applications, as sites place higher and higher demands on hardware performance, response speed, service stability, data reliability and so on, it becomes difficult for a single cloud server to carry all of the access load. Apart from using expensive mainframes and dedicated load balancing devices, another option for enterprises is to build a server cluster: integrating a number of relatively cheap ordinary servers to provide the same service at a single address.
In Linux systems, a very commonly used clustering technology is LVS (Linux Virtual Server).
Blog outline:
I. Overview of Cluster Technology
II. Detailed explanation of LVS virtual server
III. Detailed explanation of NFS shared storage service
LVS Cluster Application Foundation
Cluster: a group or collection; in the server field, the term refers to a collection of a large number of servers, to distinguish it from a single server.
I. Overview of Cluster Technology
Depending on the actual enterprise environment, the functions provided by a cluster differ, and the technical details involved may also vary. On the whole, however, we first need to understand some common characteristics of clusters, so that we can keep a clear picture when building and maintaining a cluster and avoid operating blindly.
1. Types of clusters
No matter what kind of cluster it is, it includes at least two node servers, while externally it appears as a whole and provides only one access entrance (a domain name or IP address), as if it were one large computer.
Depending on the target a cluster aims at, clusters can be divided into the following three types:
Load balancing cluster (Load Balance Cluster), LB for short: aims to improve the responsiveness of the application system, handle as many access requests as possible and reduce latency, achieving high concurrency and high load overall. For example, "DNS round robin", "application layer switching" and "reverse proxy" can all serve as load balancing clusters. The load distribution of LB depends on the distribution algorithm of the master node, which shares access requests from clients across multiple server nodes and thus relieves the load pressure on the whole system.
High availability cluster (High Availability Cluster), HA for short: aims to improve the reliability of the application system, reduce interruption time as much as possible and ensure continuity of service, achieving the fault-tolerance effect of high availability. For example, "failover", "dual-machine hot standby" and "multi-machine hot standby" all belong to high availability clustering technology. HA works in either duplex or master-slave mode: in duplex mode all nodes are online at the same time; in master-slave mode only the master node is online, but when a failure occurs a slave node can automatically take over as the master.
High performance computing cluster (High Performance Computing Cluster), HPC for short: aims to improve the CPU computing speed of the application system and expand hardware resources and analysis capability, obtaining high performance computing (HPC) power comparable to mainframes and supercomputers. For example, "cloud computing" and "grid computing" can also be regarded as kinds of high performance computing. The high performance of such a cluster relies on "distributed computing" and "parallel computing": the CPU, memory and other resources of multiple servers are integrated through dedicated hardware and software to achieve computing power that otherwise only mainframes and supercomputers possess.
Different types of clusters can be merged when necessary, such as highly available load balancing clusters.
2. Hierarchical structure of load balancing
In a typical load balancing cluster there are three levels of components. At the front end, at least one load scheduler is responsible for responding to and distributing access requests from clients. At the back end, a large number of real servers form a server pool to provide the actual application services; the scalability of the whole cluster is achieved by adding and removing server nodes, and these processes are transparent to clients. To maintain consistency of service, all nodes use shared storage devices. As shown in the figure:
A detailed description of each layer in the figure:
Layer 1, load scheduler: this is the only entrance to the whole cluster system, and it uses the common VIP (virtual IP) address of the servers, also known as the cluster IP address. Usually a master scheduler and a standby scheduler are configured for hot standby; when the master scheduler fails, traffic is smoothly switched to the standby scheduler to ensure high availability.
Layer 2, server pool: the application services provided by the cluster (such as HTTP and FTP) are carried by the server pool. Each node has an independent RIP (real IP) address and only handles the client requests distributed to it by the scheduler. When a node fails temporarily, the fault-tolerance mechanism of the load scheduler isolates it and returns it to the server pool once the error has been resolved.
Layer 3, shared storage: provides stable and consistent file access services for all nodes in the server pool, ensuring the unity of the whole cluster. In a Linux/UNIX environment, shared storage can be a NAS device or a dedicated server that provides NFS (Network File System) shares.
3. Working modes of load balancing
The load scheduling technology of a cluster can be based on IP, port, content and so on; among these, IP-based load scheduling is the most efficient.
Three working modes are common in IP-based load balancing:
Address translation: referred to as NAT mode. Similar to the private network structure of a firewall, the load scheduler acts both as the gateway for all server nodes and as the access entrance for clients. The server nodes use private IP addresses and are located on the same physical network as the load scheduler; security is better than in the other two modes. As shown in the figure:
For the implementation steps of NAT mode, please refer to the blog post on building an LVS load balancing cluster in NAT mode.
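To give a feel for what NAT mode requires on the scheduler itself, here is a minimal, hedged sketch (not the full build procedure; the private gateway address 192.168.10.1 is an assumption): the scheduler must be allowed to forward packets between the public and private networks, and each real server's default route must point back at the scheduler.
[root@localhost ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1                       # allow the scheduler to forward packets between the two networks
[root@localhost ~]# sysctl -p                 # apply the change
# on each real server, point the default route at the scheduler's private address (assumed here to be 192.168.10.1)
[root@localhost ~]# ip route add default via 192.168.10.1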
IP tunnel: referred to as TUN mode. It uses an open network structure in which the load scheduler serves only as the access entrance for clients, and each node responds to clients directly through its own Internet connection rather than back through the load scheduler. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels. As shown in the figure:
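As a hedged sketch of the real-server side in TUN mode (the VIP 192.168.1.254 is an assumption, and details vary by distribution): each node typically binds the VIP to an IP-in-IP tunnel interface and relaxes reverse-path filtering so that it can answer clients directly.
[root@localhost ~]# modprobe ipip                               # load the IP-in-IP tunnel module
[root@localhost ~]# ip addr add 192.168.1.254/32 dev tunl0      # bind the (assumed) VIP to the tunnel interface
[root@localhost ~]# ip link set tunl0 up
[root@localhost ~]# sysctl -w net.ipv4.conf.tunl0.rp_filter=0   # relax reverse-path filtering for the tunnel
[root@localhost ~]# sysctl -w net.ipv4.conf.all.rp_filter=0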
Direct routing: referred to as DR mode. It adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered around the Internet; instead they are located on the same physical network as the scheduler. The scheduler and the node servers are connected through the local network, so no dedicated IP tunnels need to be established. As shown in the figure:
For the implementation steps of DR mode, please refer to the blog post on building an LVS load balancing cluster in DR mode.
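As a hedged sketch of the real-server side in DR mode (the VIP 192.168.1.254 is an assumption): each node usually holds the VIP on its loopback interface and suppresses ARP for it, so that only the scheduler answers ARP requests for the VIP.
[root@localhost ~]# ip addr add 192.168.1.254/32 dev lo         # hold the (assumed) VIP on the loopback interface
[root@localhost ~]# sysctl -w net.ipv4.conf.lo.arp_ignore=1     # do not answer ARP requests for addresses bound to lo
[root@localhost ~]# sysctl -w net.ipv4.conf.lo.arp_announce=2   # do not use the VIP as the source in ARP announcements
[root@localhost ~]# sysctl -w net.ipv4.conf.all.arp_ignore=1
[root@localhost ~]# sysctl -w net.ipv4.conf.all.arp_announce=2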
Among the three working modes above, NAT mode requires only one public IP address, which makes it the easiest mode to use, and it offers good security; many hardware load balancing devices use this method. By comparison, DR mode and TUN mode offer greater load capacity and a wider range of applications, but the security of the nodes is slightly weaker.
II. Detailed explanation of LVS virtual server
Linux Virtual Server (LVS) is a load balancing project developed for the Linux kernel; its official website is http://www.linuxvirtualserver.org/. LVS is essentially a virtualization application based on IP addresses, and it provides an efficient solution for load balancing based on IP addresses and content request distribution.
LVS is now part of the Linux kernel and is compiled as the ip_vs module by default, which can be loaded automatically when needed. On CentOS 7, you can load the ip_vs module manually and verify it with the following commands:
[root@localhost ~]# modprobe ip_vs              # load the ip_vs module
[root@localhost ~]# cat /proc/net/ip_vs         # view ip_vs version information
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
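If you want the module to be loaded automatically after every reboot, one option on CentOS 7 is a systemd modules-load drop-in; a minimal sketch (the file name ip_vs.conf is an arbitrary choice):
[root@localhost ~]# lsmod | grep ip_vs                              # confirm the module is currently loaded
[root@localhost ~]# echo "ip_vs" > /etc/modules-load.d/ip_vs.conf   # load ip_vs automatically at boot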
1. Load scheduling algorithms of LVS
According to different network services and configuration needs, the LVS scheduler provides a variety of load scheduling algorithms; the four below are the most commonly used:
Round Robin (rr): distributes the received access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual number of connections and system load.
Weighted Round Robin (wrr): distributes requests according to the processing capacity of the real servers; the scheduler can also query the load of each node automatically and adjust the weights dynamically, so that servers with stronger processing capacity carry more of the access traffic.
Least Connections (lc): allocates received access requests, based on the number of connections each real server has established, preferentially to the node with the fewest connections. If all server nodes have similar performance, this method achieves better load balancing.
Weighted Least Connections (wlc): when the performance of the server nodes differs greatly, the weights can be adjusted automatically for the real servers; nodes with higher weights carry a larger share of the active connection load.
2. Using the ipvsadm management tool
ipvsadm is an LVS cluster management tool used on the load scheduler. It calls the ip_vs module to add and remove server nodes and to view the running status of the cluster.
ipvsadm is not installed by default on CentOS 7, so you need to install it yourself:
[root@localhost ~]# yum -y install ipvsadm
[root@localhost ~]# ipvsadm -v
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
Basic operations with the ipvsadm command:
[root@localhost ~]# ipvsadm -A -t 192.168.1.254:80 -s rr
# the VIP address of the cluster is 192.168.1.254; load distribution is provided for TCP port 80, and the scheduling algorithm is round robin (rr)
The meaning of the options in the command:
"- A" means to add a virtual server; "- t" is used to define VIP addresses and TCP ports; "- s" is used to develop load scheduling algorithms.
The available algorithms are round robin (rr), weighted round robin (wrr), least connections (lc) and weighted least connections (wlc).
[root@localhost ~]# ipvsadm -a -t 192.168.1.254:80 -r 192.168.1.100 -m -w 1
# add a real server to the virtual server whose VIP is 192.168.1.254
The meaning of the options (and related options) in the command:
-a: add a real server
-t: specify the VIP address and TCP port
-r: specify the RIP address and TCP port (the port can be omitted)
-m: use NAT cluster mode
-g: use DR cluster mode
-i: use TUN cluster mode
-w: set the weight (0 means pause the node; the default is 1)
[root@localhost ~]# ipvsadm -ln                 # view the status of the cluster nodes
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port       Forward Weight ActiveConn InActConn
TCP  192.168.1.254:80 rr
  -> 192.168.1.100:80         Masq    1      0          0
# the Forward column showing Masq (masquerading) indicates that the cluster mode is NAT;
# the ActiveConn column is the current number of active connections, and the InActConn column is the number of inactive connections
[root@localhost ~]# ipvsadm -d -r 192.168.1.100:80 -t 192.168.1.254:80
# delete the node 192.168.1.100 from the 192.168.1.254 cluster (-d means delete a real server)
[root@localhost ~]# ipvsadm -D -t 192.168.1.254:80
# delete the virtual server whose VIP is 192.168.1.254 (the whole cluster entry disappears)
[root@localhost ~]# ipvsadm-save                # view the configured policy
[root@localhost ~]# ipvsadm-save > 123.txt      # save the policy to a specified file
[root@localhost ~]# cat 123.txt                 # confirm the contents of the saved file
-A -t 192.168.1.254:http -s rr
-a -t 192.168.1.254:http -r 192.168.1.100:http -m -w 1
[root@localhost ~]# ipvsadm -C                  # clear all policies
[root@localhost ~]# ipvsadm-restore < 123.txt   # restore the policy from the specified file
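Note that rules added with ipvsadm live only in the kernel and are lost after a reboot. On CentOS 7 the ipvsadm package normally ships an ipvsadm.service unit that restores rules from /etc/sysconfig/ipvsadm, so a hedged way to make the policy persistent is:
[root@localhost ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm        # save the current rules in numeric form
[root@localhost ~]# systemctl enable ipvsadm                        # restore them automatically at boot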
III. Detailed explanation of NFS shared storage service
NFS is a network file system protocol based on TCP/IP transport. Using the NFS protocol, clients can access shared resources on a remote server as if they were local directories. For most load balancing clusters, using the NFS protocol to share data storage is very common.
1. Publish shared resources using NFS
The implementation of the NFS service depends on the RPC (Remote Procedure Call) mechanism, which completes the mapping from remote to local. To provide NFS sharing services, two packages need to be installed:
[root@localhost ~]# yum -y install nfs-utils        # for publishing and accessing NFS shares
[root@localhost ~]# yum -y install rpcbind          # for RPC support
2. Set up a shared directory
[root@localhost ~]# mkdir /a
[root@localhost ~]# touch /a/123.txt                # create a directory and a file for testing
[root@localhost ~]# vim /etc/exports                # the configuration file of the NFS service is /etc/exports; fill in the following
/a 192.168.1.0/24(rw,sync,no_root_squash)
Or:
[root@localhost ~]# vim /etc/exports
/a 192.168.1.1(rw,sync,no_root_squash) 192.168.1.10(ro,sync)
In the configuration file:
/a is the local directory to be shared; the client can be specified by an IP address or a network segment address, and the wildcards * and ? are allowed; rw means read-write, ro means read-only, sync means synchronous writes, and no_root_squash means that a client accessing the share as root keeps local root privileges (the default is root_squash, which maps root to the unprivileged nfsnobody user).
[root@localhost ~]# systemctl start nfs
[root@localhost ~]# systemctl start rpcbind         # start the nfs and rpcbind services
[root@localhost ~]# showmount -e                    # view the shared directories published by this machine
Export list for localhost.localdomain:
/a 192.168.1.10,192.168.1.1
3. Client access test
[root@localhost ~]# showmount -e 192.168.1.2        # view the shares published by the NFS server 192.168.1.2
Export list for 192.168.1.2:
/a 192.168.1.10,192.168.1.1
[root@localhost ~]# mount 192.168.1.2:/a /b         # mount the share to the local directory /b
[root@localhost ~]# ls /b
123.txt
# check the mounted content; if the sync option is enabled, any change is updated immediately on both the NFS server and the client
4. Automatic mounting on the client
[root@localhost ~]# vim /etc/fstab
...                                                 # part of the file omitted; add the following line
192.168.1.2:/a    /b    nfs    defaults,_netdev    0 0
# _netdev is a recommended extra parameter indicating that network support is required;
# if you want to give up the mount when the network is interrupted, add the soft and intr parameters to achieve a soft mount
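Two small additions are commonly useful here; a hedged sketch: after editing /etc/exports you can re-publish the shares without restarting the service, and both services can be set to start automatically at boot (on CentOS 7 the unit started above as "nfs" corresponds to nfs-server).
[root@localhost ~]# exportfs -rv                         # re-export everything in /etc/exports and list what was exported
[root@localhost ~]# systemctl enable rpcbind nfs-server  # start both services automatically at boot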