This article introduces the Linux Virtual Server (LVS) and compares its three load balancing methods: VS/NAT, VS/TUN, and VS/DR.
1. What is LVS?
LVS stands for Linux Virtual Server, a virtual server cluster system. It is a cluster technology that uses IP load balancing and content-based request distribution. The scheduler has good throughput: it distributes requests evenly across servers and automatically masks server failures, turning a group of servers into a single high-performance, highly available virtual server. The structure of the whole cluster is transparent to clients, and neither client-side nor server-side programs need to be modified. The project was founded by Dr. Zhang Wensong in May 1998 and is one of the earliest free software projects in China.
Transparency, scalability, high availability, and manageability therefore all have to be considered in the design. An LVS cluster generally has a three-tier structure, as shown in the figure:
Architecture of LVS Cluster
2. The main components of LVS are:
The load scheduler (load balancer / Director) is the front-end machine of the whole cluster. It distributes clients' requests across a group of servers, while clients see the service as coming from a single IP address (which we can call the virtual IP address, or VIP).
The server pool (server pool / RealServer) is the group of servers that actually execute client requests, typically providing services such as WEB, MAIL, FTP, and DNS.
Shared storage (shared storage) provides a common storage area for the server pool, making it easy for all servers to hold the same content and provide the same services.
3. LVS load balancing methods:
Virtual Server via Network Address Translation (VS/NAT)
VS/NAT is the simplest method: every RealServer points its default gateway at the Director. Clients can run any operating system, but the number of RealServers a single Director can drive is limited. In VS/NAT mode the Director can also double as a RealServer. The architecture of VS/NAT is shown in the figure.
The architecture of VS/NAT
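As a concrete illustration, a minimal VS/NAT sketch using the standard ipvsadm tool might look like the following; all addresses (VIP 203.0.113.10, RealServers 192.168.10.2 and 192.168.10.3, Director internal IP 192.168.10.1) are hypothetical.

```shell
# On the Director: enable forwarding so responses can be rewritten on the way back.
sysctl -w net.ipv4.ip_forward=1

# Define the virtual HTTP service on the VIP, scheduled round-robin (-s rr).
ipvsadm -A -t 203.0.113.10:80 -s rr

# Add the RealServers in masquerading (NAT) mode (-m).
ipvsadm -a -t 203.0.113.10:80 -r 192.168.10.2:80 -m
ipvsadm -a -t 203.0.113.10:80 -r 192.168.10.3:80 -m

# On each RealServer: route replies back through the Director (its internal IP),
# as VS/NAT requires the Director to be the RealServers' default gateway.
ip route add default via 192.168.10.1
```

Because both directions of traffic traverse the Director in this mode, the gateway setting on the RealServers is what makes response rewriting possible.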
Virtual Server via IP Tunneling (VS/TUN)
IP tunneling is a technique that encapsulates one IP packet inside another, so that a packet destined for one IP address can be wrapped and forwarded to a different IP address; it is also known as IP encapsulation. IP tunnels are mainly used for mobile hosts and virtual private networks (VPNs), where the tunnels are established statically, with one IP address at each end. VS/TUN's connection scheduling and management are the same as in VS/NAT; only the packet forwarding method differs. The scheduler dynamically selects a server according to each server's load, encapsulates the request packet inside another IP packet, and forwards the encapsulated packet to the chosen server. On receipt, the server decapsulates the packet and finds that the original destination address is the VIP, which is configured on its local IP tunnel device, so it processes the request and returns the response directly to the client according to its routing table.
The architecture of VS/TUN
The workflow of VS/TUN
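A minimal VS/TUN setup sketch, again with hypothetical addresses (VIP 203.0.113.10, RealServer 198.51.100.7), showing the tunneling mode flag (-i) on the Director and the tunnel-device configuration on a RealServer:

```shell
# On the Director: define the service and add the RealServer in
# tunneling mode (-i), so requests are IPIP-encapsulated and forwarded.
ipvsadm -A -t 203.0.113.10:80 -s wlc
ipvsadm -a -t 203.0.113.10:80 -r 198.51.100.7:80 -i

# On each RealServer: load the IPIP module, configure the VIP on the
# tunnel device, and bring it up so decapsulated packets are accepted locally.
modprobe ipip
ip addr add 203.0.113.10/32 dev tunl0
ip link set tunl0 up

# Relax reverse-path filtering so packets decapsulated via tunl0
# are not dropped as spoofed.
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0
```

Because the RealServer answers from the VIP on its tunnel device, responses flow straight back to the client without revisiting the Director.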
Virtual Server via Direct Routing (VS/DR)
VS/DR works by rewriting the MAC address of the request frame. The Director and the RealServers must be physically connected through network cards on the same uninterrupted LAN segment. The VIP bound to each RealServer is configured on a non-ARP network device (such as lo or tunl); the Director's VIP is visible externally, while the RealServers' VIP is not. A RealServer's own address can be either a private or a public address.
The architecture of VS/DR
The workflow of VS/DR
The workflow of VS/DR is shown in the figure. Its connection scheduling and management are the same as in VS/NAT and VS/TUN; only the packet forwarding method differs: packets are routed directly to the target server. In VS/DR the scheduler dynamically selects a server according to each server's load and, without modifying or encapsulating the IP packet, changes the destination MAC address of the frame to that of the chosen server and sends the frame on the LAN it shares with the server group. Because the frame's MAC address is that of the chosen server, that server is guaranteed to receive the frame and can extract the IP packet from it. When the server finds that the packet's destination address (the VIP) is configured on a local network device, it processes the request and returns the response directly to the client according to its routing table.
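A minimal VS/DR setup sketch with hypothetical addresses (VIP 203.0.113.10, RealServer 192.168.10.2, both on the Director's LAN segment); the sysctl settings implement the "non-ARP device" requirement described above:

```shell
# On the Director: define the service and add the RealServer in
# direct-routing (gatewaying) mode (-g).
ipvsadm -A -t 203.0.113.10:80 -s wlc
ipvsadm -a -t 203.0.113.10:80 -r 192.168.10.2:80 -g

# On each RealServer: stop the VIP from answering ARP, then bind it to lo
# so the server accepts frames whose destination MAC was rewritten to it.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
ip addr add 203.0.113.10/32 dev lo
```

The arp_ignore/arp_announce settings keep the RealServers' hidden VIP from winning ARP resolution on the shared segment, which would otherwise bypass the Director.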
4. Comparison of three load balancing methods:
Virtual Server via NAT
The advantage of VS/NAT is that the servers can run any operating system that supports TCP/IP; only one IP address needs to be configured on the scheduler, and the server group can use private IP addresses. The disadvantage is limited scalability: when the number of server nodes grows to around 20, the scheduler itself may become the new bottleneck, because in VS/NAT both request and response packets pass through the load scheduler. The average packet-rewriting delay measured on a Pentium 166 host was 60 μs (shorter on faster processors). Assuming an average TCP packet length of 536 bytes, the scheduler's maximum throughput is 8.93 MBytes/s (536 bytes / 60 μs). If each server's throughput is 800 KBytes/s, such a scheduler can drive about 10 servers (8.93 MBytes/s ÷ 800 KBytes/s ≈ 11). (Note: these figures were measured a long time ago.)
A VS/NAT-based cluster can satisfy the performance requirements of many servers. If the load scheduler does become the new bottleneck of the system, there are three ways around it: a hybrid approach, VS/TUN, and VS/DR. In the DNS hybrid cluster approach, several VS/NAT load schedulers each front their own server pool, and RR-DNS groups these schedulers under a single domain name, as sketched below.
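A minimal RR-DNS sketch, assuming a hypothetical BIND zone file and scheduler VIPs: publishing several A records under one name makes resolvers rotate among the VS/NAT schedulers.

```shell
# Hypothetical zone file path and VIPs: one A record per VS/NAT
# scheduler; round-robin DNS rotates answers among them.
cat >> /etc/bind/db.example.com <<'EOF'
www  IN  A  203.0.113.10   ; VIP of VS/NAT scheduler 1
www  IN  A  203.0.113.20   ; VIP of VS/NAT scheduler 2
EOF
```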
VS/TUN and VS/DR, however, are the better ways to raise system throughput: schemes that instead require inspecting packet contents in an application-level module add implementation work, and the inspection overhead itself reduces system throughput.
Virtual Server via IP Tunneling
In a VS/TUN cluster, the load scheduler only dispatches requests to the different back-end servers; the servers return their responses directly to the users. The scheduler can therefore handle a huge volume of requests and can drive more than 100 servers (of comparable size) without becoming the system bottleneck: even if the scheduler has only a 100 Mbps full-duplex network card, the maximum throughput of the whole system can exceed 1 Gbps. VS/TUN thus greatly increases the number of servers a single scheduler can drive and can be used to build high-performance super servers. VS/TUN requires that all servers support the IP tunneling (IP encapsulation) protocol. At present the back-end servers used with VS/TUN mainly run Linux, and other operating systems have not been tested; but since IP tunneling is becoming a standard protocol across operating systems, VS/TUN should also be suitable for back-end servers running other systems.
Virtual Server via Direct Routing
Like VS/TUN, the VS/DR scheduler only handles the client-to-server half of each connection; response data returns to the client over a separate network route. This greatly improves the scalability of an LVS cluster. Compared with VS/TUN, VS/DR avoids the overhead of IP tunneling, but it requires that the load scheduler and the real servers each have a network card on the same physical segment, and that each server's network device (or device alias) either not respond to ARP for the VIP or be able to redirect packets to a local socket port.
The advantages and disadvantages of the three LVS load balancing technologies are summarized as follows:

| | VS/NAT | VS/TUN | VS/DR |
|---|---|---|---|
| Server OS | any | must support IP tunneling | any with a non-ARP device |
| Server network | private | LAN/WAN | LAN |
| Maximum servers | low (10-20) | high (~100) | high (~100) |
| Server gateway | the load scheduler | its own router | its own router |
Note: the estimates of the maximum number of servers each method can support assume that the scheduler uses a 100 Mbps network card, that the scheduler's hardware configuration matches the back-end servers', and that the workload is general Web service. With a higher-end scheduler (such as a gigabit network card and a faster processor), the number of servers it can drive rises accordingly, and different applications change the numbers as well. These estimates are therefore mainly for a quantitative comparison of the scalability of the three methods.
5. Load balancing scheduling algorithms (the modes below are described in BIG-IP's terms; an LVS ipvsadm sketch follows the list)
1) Least Connection: passes new connections to the server currently handling the fewest connections. When a server fails at any of layers 2 through 7, BIG-IP removes it from the server queue and stops assigning it new user requests until it recovers.
2) Fastest: passes new connections to the server that responds most quickly. Failed servers (layers 2 through 7) are taken out of rotation as above until they recover.
3) Observed: selects a server for each new request based on the best balance between connection count and response time. Failed servers are taken out of rotation as above until they recover.
4) Predictive: BIG-IP analyzes the server performance metrics it has collected and assigns the request to the server whose performance is predicted to be best in the next time slice. (Detection is performed by BIG-IP.)
5) Dynamic Ratio-APM: dynamically adjusts traffic distribution according to the application and application-server performance parameters collected by BIG-IP.
6) Quality of Service (QoS): traffic flows are distributed according to different priority levels.
7) Type of Service (ToS): the load balancer distributes traffic according to the service type (identified in the Type of Service field).
8) Rule mode: users can set routing rules for different traffic flows.
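For comparison, LVS itself selects among its own scheduling algorithms (rr, wrr, lc, wlc, and others) via ipvsadm's -s option; a brief sketch with hypothetical addresses:

```shell
# Weighted least-connection (wlc) is LVS's rough analogue of a
# connection-count-based policy; weights (-w) bias traffic toward
# the more capable servers.
ipvsadm -A -t 203.0.113.10:80 -s wlc
ipvsadm -a -t 203.0.113.10:80 -r 192.168.10.2:80 -g -w 3
ipvsadm -a -t 203.0.113.10:80 -r 192.168.10.3:80 -g -w 1

# Inspect the virtual service table and live connection counts.
ipvsadm -L -n
```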
This concludes the comparison of the three load balancing methods of the Linux Virtual Server (LVS). Thank you for reading.