The Principle and Practice of LVS Load Balancing


Many beginners are not clear about the principle and practice of LVS load balancing. To help, this article explains both in detail: first as a story, then as a technical walkthrough. I hope you get something out of it.

The principle of load balancing

It was an ordinary morning in 1998.

As soon as he got to work, the boss called Zhang Dapang into the office. Sipping tea comfortably, he asked, "Dapang, why is the website our company developed getting slower and slower?"

Fortunately, Zhang Dapang had noticed the problem too and came prepared. He said helplessly, "Well, I checked the system yesterday. Traffic keeps growing; CPU, disk, and memory are all overwhelmed, and the response time at peak hours keeps getting worse."

After a pause, he asked tentatively, "Boss, how about buying a good machine to replace our shabby old server? I hear IBM servers are excellent, with strong performance. Shall we get one?"

"all right, you son of a bitch, do you know how expensive that machine is? our small company, we can't afford it!" The stingy boss immediately vetoed it. "this." Big fat means that you are running out of skills. "you go and discuss it with CTO Bill and work out a plan for me tomorrow."

The boss doesn't care about the process, only the result.

Hide the real servers

Zhang Dapang went straight to Bill and relayed the boss's instructions with feeling.

Bill smiled: "I've been thinking about this recently too. I'd like to discuss with you whether we could buy a few cheap servers, deploy more copies of the system, and scale out."

Scale out? Zhang Dapang thought to himself: if the system were deployed on several servers, user requests could be distributed across them, and the pressure on any single server would be much lower.

"but," asked Zhang Dafang, "if there are more machines, each machine has one IP, and the user may be confused. Which one do you visit?"

"certainly can not expose these servers, from the customer's point of view, it is best to have only one server." Bill said.

Zhang Dapang's eyes lit up; he suddenly had an idea: "Got it! We already have a middle layer: DNS. We can map our website's domain name to the IPs of multiple servers, so users only ever face the domain name. Then we use round-robin: when user 1's machine resolves the domain name, DNS returns IP1; when user 2's machine resolves it, DNS returns IP2. That way each machine's load stays roughly balanced."
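You can observe this idea from the client side: a domain with several A records resolves to several addresses. A minimal Java sketch, assuming a domain that actually publishes multiple A records (example.com here is just a placeholder, and results depend on your resolver and its cache):

    import java.net.InetAddress;

    public class DnsRoundRobinDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder domain; substitute one that publishes multiple A records.
            InetAddress[] addresses = InetAddress.getAllByName("example.com");
            for (InetAddress addr : addresses) {
                System.out.println(addr.getHostAddress());
            }
            // DNS-based balancing relies on resolvers rotating the record order
            // between queries; caching at every layer weakens that rotation.
        }
    }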

Bill thought for a moment and found a loophole: "There's a fatal problem with that. DNS is a hierarchical system with caches at every level, and the client machine has a cache too. If one machine fails, name resolution will keep returning the IP of the failed machine even after we delete it from DNS, and every user directed there will hit errors. That's troublesome."

Zhang Dapang really hadn't thought of the caching problem. He scratched his head: "Then it's not so easy."

Swapping in a substitute

"how about we develop our own software to achieve load balancing?" Bill found another way.

To show his idea, he drew a picture on the whiteboard: "See the blue server in the middle? We can call it the Load Balancer (LB for short). All user requests are sent to it, and it forwards them to the servers behind."

Zhang Dapang examined the picture carefully.

The Load Balancer (LB) has two IPs: an external one (115.39.19.22) and an internal one (192.168.0.100). Users only see the external IP. Behind it sit three real servers, RS1, RS2, and RS3, whose default gateways all point to the LB.

"but how do you forward the request? well, what exactly is the user's request?" Zhang Dafang is confused.

"have you forgotten all about the computer network? it's the data packet sent by the user! look at this layer-by-layer encapsulated data packet. The user sends a HTTP request to visit the home page of our website. The HTTP request is placed in a TCP message and then placed in an IP Datagram. The final destination is our Load Balancer (115.39.19.22)."

(Note: the diagram of the client's packet omits the data-link-layer frame.)

"but at first glance, this packet is sent to Load Balancer. How can it be sent to the server behind it?"

"it can be changed," Bill said. "for example, if Load Balancer wants to send this packet to RS1 (192.168.0.10), it can tamper with the packet and change it to this, and then the IP packet can be forwarded to RS1 for processing."

(The LB rewrites the destination IP and port to RS1's.)

"when the RS1 is finished, you need to return to the HTML on the home page and encapsulate the HTTP message layer by layer:" Zhang Dafang understands what's going on:

(After processing, RS1 sends the result toward the client.)

"because LB is the gateway, it will also receive this packet, so it can use the means again to replace the source address and source port with its own, and then send it to the customer."

(The LB rewrites the source address and port to its own; the client never notices.)

Zhang Dapang summed up the data flow: client -> Load Balancer -> RS -> Load Balancer -> client.

"it's a wonderful trick to cover it up," he said excitedly. "the client doesn't feel that there are several servers working behind it, and it always thinks that only Load Balancer is working."

Bill then turned to how the Load Balancer should pick among the real servers behind it. There are many strategies; he wrote on the whiteboard:

Round-robin (polling): the simplest; rotate through the servers one by one.

Weighted round-robin: give the better-performing servers a higher weight so they have a higher chance of being selected.

Least connections: send new requests to the server currently handling the fewest connections.

Weighted least connections: least connections, adjusted by weight. There are other algorithms and strategies to think about later; a minimal sketch of least connections follows below.
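To make the least-connections idea concrete, here is a minimal Java sketch; the server addresses and the per-server counters are illustrative assumptions, not any real LVS API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class LeastConnections {
        // Active-connection counters per real server (illustrative addresses).
        private final Map<String, AtomicInteger> active = new ConcurrentHashMap<>();

        public LeastConnections() {
            active.put("192.168.0.10", new AtomicInteger(0));
            active.put("192.168.0.11", new AtomicInteger(0));
            active.put("192.168.0.12", new AtomicInteger(0));
        }

        // Pick the server with the fewest active connections and claim a slot on it.
        public String pick() {
            String best = null;
            int min = Integer.MAX_VALUE;
            for (Map.Entry<String, AtomicInteger> e : active.entrySet()) {
                int n = e.getValue().get();
                if (n < min) { min = n; best = e.getKey(); }
            }
            active.get(best).incrementAndGet();
            return best;
        }

        // Release the slot when the connection closes.
        public void release(String server) {
            active.get(server).decrementAndGet();
        }

        public static void main(String[] args) {
            LeastConnections lc = new LeastConnections();
            String s1 = lc.pick();
            String s2 = lc.pick(); // a different server, since s1 now holds a connection
            System.out.println(s1 + " " + s2);
            lc.release(s1);
        }
    }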

Layer 4 or layer 7?

Zhang Dapang thought of another problem: a single user request may be split across multiple packets. If the Load Balancer sends those packets to different machines, everything gets completely mixed up. He told Bill what he was thinking.

"this is a good question," Bill said. "our Load Balancer has to maintain a table that records which real server the client packet was forwarded to, so that when the next packet arrives, we can forward it to the same server."

"it seems that this load balancing software needs to be connection-oriented, that is, layer 4 of the OSI network architecture, which can be called four-layer load balancing," Bill made a summary.

"since there is a four-tier load balancer, is it possible to have a seven-tier load balancer?" Zhang Dafang had a whim.

"that's for sure. If our Load Balancer takes out the message data in the HTTP layer and distributes the request to the real server according to the URL, browser, language and other information, it will be a seven-tier load balancer. But let's implement a four-tier one at this stage, and we'll talk about it later."

Bill asked Zhang Dapang to organize a team to develop this load-balancing software.

Zhang Dapang didn't dare slack off. For the protocol details he bought the three volumes of TCP/IP Illustrated, had the team quickly brush up their C, and then development began in earnest.

Separation of responsibility

Three months later, the first version of the Load Balancer, software running on Linux, was ready. The company tried it out, and it felt really good: load balancing achieved with just a few cheap servers.

Seeing the problem solved without spending much money, the boss was so satisfied that he gave Zhang Dapang's team a bonus of 1,000 yuan and took everyone out for a meal.

Zhang Dapang and his team could see the boss was stingy, and they were slightly dissatisfied, but they let it go: building this software had taught them a lot of low-level knowledge, especially about the TCP protocol.

But the good times didn't last. Zhang Dapang found the Load Balancer had a bottleneck: all traffic passes through it, and it has to rewrite both the packets coming from clients and the packets going back to them.

Network traffic also has a striking asymmetry: request messages are short, while responses often carry a lot of data. An HTTP GET request is pitifully short, but the HTML it returns is extremely long, which further aggravates the Load Balancer's packet-rewriting work.

Zhang Dapang hurried to Bill. Bill said, "This is indeed a problem. Let's separate the request path from the response path: the Load Balancer handles only requests, and each server sends its response directly to the client. Wouldn't that eliminate the bottleneck?"

"how do you deal with it separately?"

"first of all, let all servers have the same IP. Let's call it VIP (see figure 115.39.19.22)."

Zhang Dapang had accumulated a wealth of experience developing the first version of the Load Balancer.

He asked: "you are binding the loopback of each actual server to that VIP, but there is a problem. So many servers have the same IP. When the IP packet comes, which server should handle it?"

"notice that IP packets are actually sent through the data link layer. Look at this picture."

Zhang Dapang saw that the client's HTTP message was again wrapped in a TCP segment with destination port 80, and that the destination of the IP datagram was 115.39.19.22 (the VIP).

The question mark in the picture is the destination MAC address. How is it obtained?

Right: with ARP. Broadcast a query for the IP address (115.39.19.22), and the machine owning that IP replies with its MAC address. But now several machines have the same IP (115.39.19.22). What then?

Bill said: "We only let Load Balancer respond to the ARP request of this VIP address (115.39.19.22). For RS1,RS2,RS3, suppressing the ARP response to this VIP address, can't we uniquely determine the Load Balancer?"

I see! Zhang Dapang suddenly got it.

Once the Load Balancer has the IP packet, it can use some strategy to pick a server among RS1, RS2, and RS3, say RS1 (192.168.0.10), wrap the IP datagram in a data-link frame addressed to RS1's MAC, and forward it directly.

RS1 (192.168.0.10) receives the frame and takes a look: the destination IP is 115.39.19.22, which is one of its own addresses, so it can process the packet.

After processing, RS1 can respond directly to the client without going through the Load Balancer at all, because its own address is also 115.39.19.22.

The client sees only the one address, 115.39.19.22, and has no idea what goes on behind it.

Bill added: "since Load Balancer will not modify the IP Datagram at all, the port number of the TCP in it will not be changed, which requires that the port number on RS1 and RS2,RS3 must be the same as that of Load Balancer."

As before, Zhang Dapang summed up the data flow:

client -> Load Balancer -> RS -> client

Bill said, "what do you think? is this all right?"

Zhang Dapang thought it over: the approach seemed to have no holes and was very efficient. The Load Balancer only has to deliver user requests to the chosen server; everything else is handled by the real servers and is no longer its concern.

He said happily, "Yes, I set out to take someone to achieve it."

Postscript: what this story describes is in fact the principle behind the famous open-source software LVS; the two load-balancing methods above are LVS's NAT and DR modes. LVS is a free software project founded by Dr. Zhang Wensong in May 1998 and is now part of the Linux kernel. At that time I was still happily tinkering with my personal web pages, had only just learned to install and use Linux, and my server-side development went no further than ASP; I had never heard of concepts like LVS or load balancing. A programming language can be learned and that gap closed, but the gap in vision and depth is a chasm that is hard to cross.

Reading notes: I have read quite a few articles on LVS and usually came away remembering only the concepts, never putting myself in the designers' shoes to ask why. Liu Xin, the story's original author, always returns to the scene his characters live in, taking me back to that time to think, deduce, and reconstruct step by step how it evolved into the LVS we know.

I too have been in software development for more than a decade, mostly building application software for a particular industry; I console myself that it is cross-disciplinary work between that industry and computing, but I have never gone deep into computer systems themselves. Industry application development demands so much domain knowledge that the software side mostly reduces to choosing a suitable architecture and then designing, building, and maintaining the system. To become a real computing expert, one should get the chance to research one computing field in depth: the Linux kernel, search, graphics and imaging, databases, distributed storage, and, of course, artificial intelligence.

Distributed architecture in practice: load balancing

Maybe when I'm old I will still write code, not for anything else, just as a hobby.

1 What is load balancing (Load Balancing)?

In the early days of a website, we usually provide all services from a single machine. As business volume grows, performance and stability come under increasing pressure, and we think of scaling out for better service, typically by forming a cluster of machines. But the site still exposes a single access point, such as www.taobao.com. When users type www.taobao.com into a browser, how do their requests get spread across different machines in the cluster? That is exactly what a load balancer does.

At present most Internet systems use server clusters: the same service is deployed on multiple servers, which provide it as a whole. Such clusters may be Web application server clusters, database server clusters, distributed cache server clusters, and so on.

In practice there is almost always a load-balancing server in front of the Web server cluster. Its task is to act as the entrance for Web traffic: pick the most suitable Web server and forward the client's request to it, transparently to the client. Cloud computing and distributed architectures, so popular in recent years, essentially treat back-end servers as computing and storage resources that a management server wraps into a single externally offered service; the client does not care which machine actually serves it, as if facing one server of almost unlimited capacity, while in reality the back-end cluster does the work. Software load balancing solves two core problems: whom to choose, and how to forward; the most famous implementation is LVS (Linux Virtual Server).

(Figure: the topology of a typical Internet application.)

2 Classification of load balancing

We now know that load balancing is a computer networking technique for distributing load across multiple computers (a cluster), network links, CPUs, disk drives, or other resources, in order to optimize resource usage, maximize throughput, minimize response time, and avoid overload. There are many ways to implement it, falling into the categories below; layer-4 and layer-7 load balancers are the most commonly used.

A layer-2 load balancer provides a VIP (virtual IP). The machines in the cluster all use this same IP address, but their MAC addresses differ. When the load-balancing server receives a request, it rewrites the frame's destination MAC address and forwards the request to the target machine.

A layer-3 load balancer is similar: it likewise provides a VIP (virtual IP), but the machines in the cluster use distinct IP addresses. When the load-balancing server receives a request, it forwards it to a real server by IP, according to the load-balancing algorithm.

A layer-4 load balancer works at the transport layer of the OSI model, where TCP and UDP live; together with the IP header, traffic at this layer is identified by source and destination IP plus source and destination port number. After receiving a client's request, the layer-4 balancer forwards the traffic to an application server by modifying the packet's address information (IP plus port).
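As a toy illustration of "modifying IP plus port": the balancer rewrites only the address fields of a packet and leaves the payload alone. A Java sketch with an invented Packet record (requires Java 16+ for records):

    public class NatRewriteDemo {
        // Simplified stand-in for an IP/TCP header plus payload.
        record Packet(String srcIp, int srcPort, String dstIp, int dstPort, byte[] payload) {}

        // Layer-4 NAT forwarding: only the destination fields change.
        static Packet toRealServer(Packet in, String realIp, int realPort) {
            return new Packet(in.srcIp(), in.srcPort(), realIp, realPort, in.payload());
        }

        public static void main(String[] args) {
            Packet fromClient = new Packet("203.0.113.7", 40000, "115.39.19.22", 80, new byte[0]);
            Packet forwarded = toRealServer(fromClient, "192.168.0.10", 80);
            System.out.println(forwarded.dstIp() + ":" + forwarded.dstPort());
        }
    }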

A layer-7 load balancer works at the application layer of the OSI model, home to many protocols such as HTTP, RADIUS, and DNS, and it can balance based on their content. Application protocols carry a great deal of meaningful information; for the same Web servers, besides balancing on IP plus port, a layer-7 balancer can also decide based on the URL, browser type, or language.

For ordinary applications, Nginx is enough; it can act as a layer-7 load balancer. Large websites, however, generally use multiple levels: DNS first, then layer-4 load balancing, then layer-7 load balancing.
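As a taste of layer-7 balancing in practice, a minimal Nginx configuration sketch (the upstream addresses are placeholders):

    http {
        upstream backend {
            server 192.168.1.12;
            server 192.168.1.13;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://backend;   # round-robin by default
            }
        }
    }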

3 Commonly used load-balancing tools

Hardware load balancers offer superior performance and comprehensive features, but they are expensive, so they generally suit companies with deep pockets or long-term enterprise use. Software load balancing is therefore widely used on the Internet; the common packages are Nginx, LVS, and HAProxy, currently the three most widely deployed load-balancing software.

3.1 LVS

LVS (Linux Virtual Server) is a free software project initiated by Dr. Zhang Wensong. The goal of LVS is to use its load-balancing technology together with the Linux operating system to build a high-performance, highly available server cluster with good reliability, scalability, and manageability, achieving optimal service performance at low cost. LVS is mainly used for layer-4 load balancing.

A server cluster built on the LVS architecture consists of three parts: the front-end load-balancing layer (Load Balancer), the middle server farm (Server Array), and the bottom shared storage layer (Shared Storage). To users it is all transparent; they are simply using the high-performance service of one virtual server.

(Figure: the architecture of LVS.)

A detailed description of the various levels of LVS:

Load Balancer layer: sits at the front of the whole cluster and consists of one or more load schedulers (Director Servers). The LVS modules are installed on the Director Server, whose main role resembles a router's: it holds routing tables set up for the LVS service and uses them to dispatch user requests to the application servers (Real Servers) in the Server Array layer. The Director Server also runs Ldirectord, a module that monitors the health of each Real Server's service, removing a Real Server from the LVS routing table when it becomes unavailable and re-adding it when it recovers.

Server Array layer: a group of machines that actually run the application services. A Real Server can be one or more of a WEB server, MAIL server, FTP server, DNS server, or video server. The Real Servers are connected over a high-speed LAN or a distributed WAN. In practice a Director Server can also double as a Real Server.

Shared Storage layer: storage that gives all Real Servers a shared space with consistent content; physically it is usually a disk array. For content consistency, data can be shared through the NFS network file system, but NFS does not perform well under heavy load; in that case a cluster file system can be used, such as Red Hat's GFS or Oracle's OCFS2.

Looking at the structure as a whole, the Director Server is the core of LVS. Today the Director Server can only run Linux or FreeBSD; Linux kernels from 2.6 on support LVS without any patching, while FreeBSD is rarely used as a Director and performs less well. A Real Server, by contrast, can run almost anything: Linux, Windows, Solaris, AIX, and the BSD family are all well supported.

3.2 Nginx

Nginx (pronounced "engine x") is a web server that can also reverse-proxy HTTP, HTTPS, SMTP, POP3, and IMAP, act as a load balancer, and serve as an HTTP cache. Nginx is mainly used for layer-7 load balancing. On concurrency: official figures claim 50,000 concurrent connections; in practice about 20,000 is typical in China, and tuned deployments reach 100,000. Actual performance depends on the scenario.

Characteristics

Modular design: good extensibility; features can be added through modules.

High reliability: implemented with a master process and workers; if one worker runs into trouble, another worker is started immediately.

Low memory consumption: 10,000 keep-alive connections consume only about 2.5 MB of memory.

Hot deployment: the configuration can be updated, log files rotated, and the server binary upgraded without stopping the server.

Strong concurrency: official figures claim 50,000 concurrent connections.

Feature-rich: an excellent reverse proxy and flexible load-balancing strategies.

The basic working mode of Nginx

(Figure: the basic working mode of Nginx.)

A master process spawns one or more worker processes. The master is started as root because Nginx listens on port 80, and only privileged users may bind ports below 1024. The master handles starting workers, loading the configuration file, and smooth upgrades; everything else is left to the workers. A worker itself performs only the simplest web tasks; the rest is done by modules invoked inside the worker, which cooperate in a pipeline: one user request is completed by several modules applying their functions in turn. For example, the first module only parses the request, the second only fetches data, and the third only compresses the data; each does its part in sequence to complete the whole job.

How does hot deployment work? As noted, the master does no request work itself; it only reads the configuration file and directs the workers. So when a module or the configuration changes, the master reads the change while the workers are unaffected. The workers keep serving with the old configuration, and as each worker finishes its in-flight work, the master kills that child process and replaces it with a new one that uses the new rules.

3.3 HAProxy

HAProxy is another widely used load balancer. Written in C, it is free and open-source software providing high availability, load balancing, and proxying for TCP- and HTTP-based applications, with virtual-host support; it is a fast and reliable solution particularly suited to heavily loaded web sites. Its operating modes let it integrate easily and safely into an existing architecture while keeping your web servers off the open network. HAProxy is mainly used for layer-7 load balancing.

4 Common load-balancing algorithms

As mentioned above, when the load-balancing server decides which real server to forward a request to, it does so through a load-balancing algorithm. These algorithms fall into two categories: static and dynamic.

Static load-balancing algorithms include: round-robin, ratio, priority.

Dynamic load-balancing algorithms include: least connections, fastest response, observed, predictive, dynamic ratio, dynamic server replenishment, quality of service, type of service, and rule-based.

Round Robin (polling): connect to each server in sequence, cycling through them. When a server fails at any of layers 2 through 7, BIG-IP takes it out of the rotation and skips it until it recovers. Implementations usually also give each server a weight, which brings two benefits: load can be spread according to differences in server performance, and removing a node is as simple as setting its weight to 0. Advantages: simple, efficient, easy to scale horizontally. Disadvantage: which node a request lands on is unpredictable, so it is unsuitable for scenarios with writes (caches, database writes). Applicable scenario: read-only database or application-service layers.

Random: distribute requests randomly across nodes; with enough traffic the distribution evens out. Advantages: simple to implement, easy to scale horizontally. Disadvantages: the same as round robin; unusable where writes occur. Applicable scenario: read-only database load balancing.

Hash: compute the target node from a key, guaranteeing that the same key always lands on the same server. Advantage: since a given key always maps to the same node, it works for cache scenarios with both reads and writes. Disadvantage: when a node fails, the hash keys are redistributed, and the hit rate drops sharply. Mitigation: consistent hashing, or using keepalived to keep every node highly available so a standby takes over on failure. Applicable scenarios: caches, read-write workloads.

Consistent hashing: when a server node fails, only the keys on that node are affected, maximizing the retained hit rate; the ketama scheme in twemproxy is one example. In production you can also hash on a planned sub-key so that keys with local affinity land on the same server. Advantage: limited hit-rate drop after a node failure. Applicable scenario: caches. A minimal sketch follows.
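A minimal Java sketch of a consistent-hash ring; the virtual-node count and the use of String.hashCode are simplifying assumptions (production code would use a stronger hash such as MD5 or MurmurHash):

    import java.util.SortedMap;
    import java.util.TreeMap;

    public class ConsistentHash {
        private final SortedMap<Integer, String> ring = new TreeMap<>();
        private static final int VIRTUAL_NODES = 100; // replicas per server, for smoothness

        public void addServer(String server) {
            for (int i = 0; i < VIRTUAL_NODES; i++) {
                ring.put(hash(server + "#" + i), server);
            }
        }

        public void removeServer(String server) {
            for (int i = 0; i < VIRTUAL_NODES; i++) {
                ring.remove(hash(server + "#" + i));
            }
        }

        // Walk clockwise from the key's position to the first virtual node.
        // (At least one server must have been added first.)
        public String serverFor(String key) {
            SortedMap<Integer, String> tail = ring.tailMap(hash(key));
            int point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
            return ring.get(point);
        }

        private static int hash(String s) {
            // String.hashCode is weak but keeps the sketch dependency-free.
            int h = s.hashCode();
            return h == Integer.MIN_VALUE ? 0 : Math.abs(h);
        }
    }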

Load by key range: partition by key range; for example, the first 100 million keys are stored on the first server and keys from 100 million to 200 million on the second. Advantage: easy horizontal expansion; when storage runs short, add a server for subsequent new data. Disadvantage: uneven load; the database is unevenly spread (data has hot and cold parts, and recently registered users are generally the most active, so one server ends up very busy while other nodes sit idle). Applicable scenario: database shard load balancing.

Load by key modulo node count: route by the key modulo the number of server nodes; for example, with four servers, keys congruent to 0 go to the first node, 1 to the second, and so on. Advantages: hot and cold data are spread out, and database nodes are loaded evenly. Disadvantage: hard to scale horizontally, since changing the node count remaps nearly every key. Applicable scenario: database shard load balancing.

Purely dynamic node load balancing: decide where the next request goes based on each node's CPU, IO, and network capacity. Advantage: makes full use of server resources and keeps the load on every node balanced. Disadvantages: complex to implement, rarely used in practice.

No active load balancing: switch to an asynchronous, pull-based model using a message queue, which dissolves the balancing problem. Load balancing is a push model that keeps sending you data; instead, send all user requests to a message queue and let downstream nodes pull work whenever they are idle. With the pull model, downstream load imbalance disappears. Advantages: the queue buffers and protects the back end, so a surge of requests does not crush the servers; horizontal scaling is easy, since a new node just starts consuming the queue. Disadvantage: not real-time. Applicable scenarios: anything that need not respond in real time; for example, after an order is placed, immediately return "your order is queued" and notify the user asynchronously once processing completes. A minimal sketch follows.
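A minimal Java sketch of the pull model, with an in-process queue standing in for a real message broker (the queue, worker count, and task names are illustrative assumptions; the demo runs until interrupted):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PullModelDemo {
        public static void main(String[] args) {
            // Stand-in for a message broker; producers push, workers pull.
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();

            // Two workers: each pulls the next task only when it is free,
            // so load balances itself without a scheduler.
            for (int w = 1; w <= 2; w++) {
                final int id = w;
                new Thread(() -> {
                    try {
                        while (true) {
                            String task = queue.take(); // blocks until work exists
                            System.out.println("worker " + id + " handled " + task);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }

            // Producer: a burst of requests is absorbed by the queue.
            for (int i = 1; i <= 10; i++) {
                queue.offer("order-" + i);
            }
        }
    }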

Ratio: each server is assigned a weight, and user requests are distributed among the servers in proportion to those weights. When a server fails at any of layers 2 through 7, BIG-IP takes it out of the queue and stops assigning it requests until it recovers.

Priority: servers are grouped and each group is given a priority. BIG-IP assigns user requests to the highest-priority group (within a group, round-robin or ratio is used). Only when all servers in the highest-priority group have failed does BIG-IP send requests to a lower-priority group; in effect this gives users hot standby.

Least Connections: pass new connections to the server handling the fewest connections. When a server fails at any of layers 2 through 7, BIG-IP takes it out of the queue until it recovers.

Fastest: pass the connection to the server that responds the fastest. When a server fails at any of layers 2 through 7, BIG-IP takes it out of the queue until it recovers.

Observed: pick a server for new requests based on the best balance between connection count and response time. When a server fails at any of layers 2 through 7, BIG-IP takes it out of the queue until it recovers.

Predictive: BIG-IP analyzes trends in the performance metrics it collects and, for each user request, picks the server whose performance is predicted to be best in the next time slice. (The detection is done by BIG-IP.)

Dynamic Ratio (APM): BIG-IP collects performance metrics from applications and application servers and adjusts traffic allocation dynamically.

Dynamic Server Replenishment (Dynamic Server Act.): when failures shrink the primary server farm, backup servers are dynamically added to it.

Quality of service (QoS): data streams are assigned according to different priorities.

Type of service (ToS): data flows are allocated according to their type of service (identified in the Type of Service field).

Rule mode: users can set their own guiding rules for different data streams.

Java implementations of several load-balancing algorithms follow (package name and server addresses as in the original; the flattened code has been restored, with the broken increments and synchronization fixed).

Round-robin:

    package com.boer.tdf.act.test;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Round-robin load-balancing algorithm. */
    public class TestRoundRobin {

        static Map<String, Integer> serverWeightMap = new HashMap<>();
        static {
            serverWeightMap.put("192.168.1.12", 1);
            serverWeightMap.put("192.168.1.13", 1);
            serverWeightMap.put("192.168.1.14", 2);
            serverWeightMap.put("192.168.1.15", 2);
            serverWeightMap.put("192.168.1.16", 3);
            serverWeightMap.put("192.168.1.17", 3);
            serverWeightMap.put("192.168.1.18", 1);
            serverWeightMap.put("192.168.1.19", 2);
        }

        private int pos = 0;
        private final Object lock = new Object();

        public String roundRobin() {
            // Copy the map so that servers going up or down do not cause concurrency problems.
            Map<String, Integer> serverMap = new HashMap<>(serverWeightMap);
            // Take the ip list; plain round-robin ignores the weights.
            List<String> keyList = new ArrayList<>(serverMap.keySet());
            synchronized (lock) {
                if (pos >= keyList.size()) {
                    pos = 0;
                }
                return keyList.get(pos++);
            }
        }

        public static void main(String[] args) {
            TestRoundRobin robin = new TestRoundRobin();
            for (int i = 0; i < 20; i++) {
                System.out.println(robin.roundRobin());
            }
            // Cycles through the eight addresses in map-iteration order.
        }
    }

Weighted random:

    package com.boer.tdf.act.test;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    /** Weighted random load-balancing algorithm. */
    public class TestWeightRandom {

        static Map<String, Integer> serverWeightMap = new HashMap<>();
        static {
            serverWeightMap.put("192.168.1.12", 1);
            serverWeightMap.put("192.168.1.13", 1);
            serverWeightMap.put("192.168.1.14", 2);
            serverWeightMap.put("192.168.1.15", 2);
            serverWeightMap.put("192.168.1.16", 3);
            serverWeightMap.put("192.168.1.17", 3);
            serverWeightMap.put("192.168.1.18", 1);
            serverWeightMap.put("192.168.1.19", 2);
        }

        public static String weightRandom() {
            // Copy the map to avoid concurrency problems from servers going up or down.
            Map<String, Integer> serverMap = new HashMap<>(serverWeightMap);
            // Repeat each server in the list "weight" times, then pick uniformly.
            List<String> serverList = new ArrayList<>();
            for (Map.Entry<String, Integer> entry : serverMap.entrySet()) {
                for (int i = 0; i < entry.getValue(); i++) {
                    serverList.add(entry.getKey());
                }
            }
            Random random = new Random();
            return serverList.get(random.nextInt(serverList.size()));
        }

        public static void main(String[] args) {
            System.out.println(weightRandom()); // e.g. 192.168.1.16
        }
    }

Random:

    package com.boer.tdf.act.test;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    /** Random load-balancing algorithm. */
    public class TestRandom {

        static Map<String, Integer> serverWeightMap = new HashMap<>();
        static {
            serverWeightMap.put("192.168.1.12", 1);
            serverWeightMap.put("192.168.1.13", 1);
            serverWeightMap.put("192.168.1.14", 2);
            serverWeightMap.put("192.168.1.15", 2);
            serverWeightMap.put("192.168.1.16", 3);
            serverWeightMap.put("192.168.1.17", 3);
            serverWeightMap.put("192.168.1.18", 1);
            serverWeightMap.put("192.168.1.19", 2);
        }

        public static String random() {
            // Copy the map to avoid concurrency problems from servers going up or down.
            Map<String, Integer> serverMap = new HashMap<>(serverWeightMap);
            List<String> keyList = new ArrayList<>(serverMap.keySet());
            Random random = new Random();
            return keyList.get(random.nextInt(keyList.size()));
        }

        public static void main(String[] args) {
            System.out.println(random()); // e.g. 192.168.1.16
        }
    }

ip_hash:

    package com.boer.tdf.act.test;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** ip_hash load-balancing algorithm. */
    public class TestIpHash {

        static Map<String, Integer> serverWeightMap = new HashMap<>();
        static {
            serverWeightMap.put("192.168.1.12", 1);
            serverWeightMap.put("192.168.1.13", 1);
            serverWeightMap.put("192.168.1.14", 2);
            serverWeightMap.put("192.168.1.15", 2);
            serverWeightMap.put("192.168.1.16", 3);
            serverWeightMap.put("192.168.1.17", 3);
            serverWeightMap.put("192.168.1.18", 1);
            serverWeightMap.put("192.168.1.19", 2);
        }

        /**
         * @param remoteIp the requesting client's ip
         */
        public static String ipHash(String remoteIp) {
            // Copy the map to avoid concurrency problems from servers going up or down.
            Map<String, Integer> serverMap = new HashMap<>(serverWeightMap);
            List<String> keyList = new ArrayList<>(serverMap.keySet());
            // Math.abs guards against a negative hashCode.
            int serverPos = Math.abs(remoteIp.hashCode()) % keyList.size();
            return keyList.get(serverPos);
        }

        public static void main(String[] args) {
            System.out.println(ipHash("192.168.1.12")); // e.g. 192.168.1.18
        }
    }
