
Some Ideas and Solutions for Achieving High Availability and Load Balancing


The following shares some ideas and solutions for achieving high availability and load balancing, in the hope that they help you in practical application. Load balancing touches on many things, there is not much theory to it, and plenty has already been written about it online, so today I will answer from experience accumulated in the industry.

First, why is there no DNS?

Let's bring back a diagram from the first article to review it.

(Figure: cdn.xitu.io/2018/10/22/1669bee11791a171?w=1099&h=1043&f=png&s=76592)

Some friends asked earlier why DNS was not listed. In my view, the essence of DNS is to solve the "domain name -> IP" problem. Although DNS is used not only on the public network but also for custom domain name resolution on private networks, it is still a stretch to rely on it for load balancing between programs.

Of course, the "intelligent resolution" feature of DNS can return IPs dynamically, which does provide a degree of load balancing. However, because what it balances works at L3 (the network layer), it cannot act on ports. Most communication between our programs involves ports, so DNS will not be discussed further in this article.
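
For context, the kind of load balancing DNS can offer looks roughly like the sketch below: several A records behind one name, handed out in rotation. The zone name and addresses are made up for illustration; the point is that only an IP can be chosen, never a port.

    ; Hypothetical BIND-style zone fragment showing round-robin DNS.
    ; Resolvers rotate through these A records, so traffic is spread
    ; across IPs only -- the target port cannot be influenced here.
    www.example.com.    60    IN    A    203.0.113.11
    www.example.com.    60    IN    A    203.0.113.12
    www.example.com.    60    IN    A    203.0.113.13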

Second, how to implement it?

Now that we know at which links load balancing should be considered, the next step is to think about how to proceed, step by step.

In ancient warfare, armies usually put shield bearers at the front to absorb the enemy's assault. In a sense, a load balancing solution is a similar defensive facility, because its premise is being able to take on the upstream traffic. So the further "forward" the load balancing sits, the better the effect, because the wider the range of applications it protects.

If a system has never used load balancing and is now doing it for the first time, how should we choose? Xiao Z will walk you through the options in order of priority.

01 Hardware load balancing

The most famous piece of hardware is F5 (with a 51.44% market share, according to ZOL), which has largely stolen the limelight from the other hardware vendors. The defining trait of this kind of hardware load balancer is that it is "premium": it is a purely commercial product, and the resources and energy invested in it are beyond what most open-source software load balancers can match, so it is very powerful. It also includes many extra functions beyond load balancing, such as access acceleration, compression, and security.

A side note: when building a network with F5 there are two structures, serial and parallel, also known as inline (direct connection) mode and bypass mode. The former has the advantage of putting less pressure on the hardware and being naturally more secure, while the latter requires fewer changes to the existing network architecture and scales better. Deploy according to your own situation.

This class of hardware can handle L2~L7 forwarding at the same time, so every annotated point in the figure above can be load balanced with it. So if the budget allows, going straight to F5 solves many problems that would otherwise take much more time. In short, when "time" matters more than "money", the hardware solution is the recommended first choice.

02 Software load balancing (L7)

When "money" is more important than "time", we can achieve the effect we want through software. Accordingly, some operation and maintenance costs have been increased.

In general, as long as we do not abuse the database, the application is usually the first thing that needs to break out: from the combination of "single application + single database" to "multiple applications + single database". For L7 load balancing of Web applications, the two most mainstream products are Nginx and HAProxy. The most important characteristic of an L7 load balancer is flexibility: we can inspect the requested URL and headers, so any information in them can drive the load balancing strategy.
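
To illustrate that flexibility, here is a minimal Nginx sketch (inside the http context); the upstream names, addresses, ports, and the X-User-Id header are hypothetical. Requests are split by URL prefix, and one pool hashes on a request header so the same user always lands on the same backend.

    # Hypothetical L7 routing rules; adjust names and addresses to your environment.
    upstream order_service {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }
    upstream report_service {
        hash $http_x_user_id consistent;   # stick a user to one backend via a header
        server 10.0.0.21:8080;
        server 10.0.0.22:8080;
    }
    server {
        listen 80;
        location /orders/  { proxy_pass http://order_service; }
        location /reports/ { proxy_pass http://report_service; }
    }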

This category is the "reverse proxy" in the earlier figure. It acts as the bridge between "client" and "Web application", between "front end" and "back end". In practice there are two main steps:

1. In the public DNS for the domain, add a record resolving to the "reverse proxy": record type "A", record value the IP of the "reverse proxy".

2. Configure the IPs and ports of the Web applications that actually serve requests, together with the load balancing policy. The configuration shown in the figure above is an Nginx example; the default load balancing policy is round-robin.
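
Putting the two steps together, a minimal sketch might look like the following; the domain, IPs, ports, and weights are placeholders, not values from the original article.

    # Step 1 (public DNS): an A record pointing the domain at the reverse proxy host.
    #   www.example.com.    IN    A    198.51.100.10
    #
    # Step 2 (Nginx): register the real Web application instances.
    # With no policy specified, Nginx defaults to round-robin; "weight" skews traffic.
    upstream web_app {
        server 10.0.0.11:8080 weight=2;
        server 10.0.0.12:8080;
    }
    server {
        listen 80;
        server_name www.example.com;
        location / { proxy_pass http://web_app; }
    }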

03 Software load balancing (L4)

When the "service" of the TCP protocol that "Web application" depends on needs to scale out, or needs to be a multi-master and master-slave cluster of "database" and "distributed cache", then a load balancing software that supports L4 is needed. The most famous one here is LVS, which was founded by Dr. Zhang Wensong in May 1998 and incorporated into the Lunix kernel at the end of 2004. Because it is a kernel program, it is better in performance and resource consumption than using Nginx and HAProxy to do L4 load balancing.

In practical application there are two main steps:

1. Add an IP virtual service (IPVS) in LVS and specify its IP, port, and load balancing policy.

2. Associate the IP virtual service with the real servers, and specify the mode and weight information. (NAT or FULLNAT mode can be used for L4 load balancing.)
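
As a rough sketch, the two steps map onto ipvsadm roughly as follows; the virtual IP, real-server addresses, and weights are placeholders.

    # Step 1: create the IP virtual service (VIP:port) with a scheduling policy
    #         (-s rr = round-robin; wrr, lc, wlc, etc. are also available).
    ipvsadm -A -t 192.168.10.100:3306 -s rr

    # Step 2: associate real servers, choosing mode and weight
    #         (-m = NAT, -g = DR, -i = TUN; -w = weight).
    ipvsadm -a -t 192.168.10.100:3306 -r 192.168.10.11:3306 -m -w 1
    ipvsadm -a -t 192.168.10.100:3306 -r 192.168.10.12:3306 -m -w 2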

Another side note: LVS has four modes. Besides NAT and FULLNAT (an enhanced version of NAT), its TUN mode can load balance at L3 and its DR mode at L2; at that level it is effectively on a par with the hardware. And although functionality narrows as you go deeper down the stack, if you ignore ports and balance purely at the IP level, DR mode reduces packet handling to the minimum, so it is the best performance achievable with software load balancing.
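
For reference, DR mode also needs a little preparation on each real server: the virtual IP is bound on the loopback interface while ARP replies for it are suppressed, so that only the LVS director answers for the VIP. A rough sketch, with a placeholder VIP:

    # On each real server behind an LVS DR-mode virtual service.
    ip addr add 192.168.10.100/32 dev lo        # accept packets addressed to the VIP
    sysctl -w net.ipv4.conf.lo.arp_ignore=1     # do not answer ARP for the VIP
    sysctl -w net.ipv4.conf.lo.arp_announce=2
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2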

In addition, the virtual IP concept used in LVS is essentially the same as the "server" concept in Nginx: both define a unified entry point, and there is no functional difference. Associating an upstream with a server in Nginx corresponds to the association made in step 2 of the LVS procedure.

There are plenty of tutorials online for each specific solution, so I will not go through them here. Look them up when you actually need them, and prefer the official documentation where possible.

Third, advantages and disadvantages

I have done the hard work of consolidating the advantages, disadvantages, and usage scenarios of all the products of each type. However, there are many of them I have never used myself, so treat this only as a reference. I have also corrected some outdated conclusions that are still repeated all over the Internet, such as the claim that Nginx does not support session stickiness.

We can see that different solutions have different priorities. So when a single solution is no longer enough, we can combine them and let each do what it does best.

In the field of load balancing, high availability and performance are the two most important factors. Here is a combination Xiao Z recommends, which is also the most widely used once a system reaches hundreds of millions of PV per hour. In theory, by taking advantage of the load balancing effect of domain name resolution in the DNS step, you only need to replicate multiple LVS master/slave pairs and bind multiple different virtual IPs to scale out without limit and absorb ever-growing traffic.

The three pieces of software used are all open-source products. LVS + Keepalived is responsible for load balancing the Nginx layer, while Nginx distributes the actual requests to the HTTP- and TCP-based applications.

As for the LVS mode, prefer DR mode for L2 forwarding within the same network segment, which gives the best performance; otherwise use TUN mode for L3 distribution. Meanwhile, letting Nginx handle the L4 and L7 distribution takes full advantage of its flexibility and extensibility, as well as extra features such as caching.
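
A compressed sketch of what the LVS + Keepalived layer of this combination might look like is given below; the VIP, the Nginx real-server addresses, and the timers are placeholders, not values from the article.

    # Hypothetical keepalived.conf fragment on the LVS director.
    vrrp_instance VI_1 {
        state MASTER                 # BACKUP on the standby director
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.10.100           # the virtual IP that fails over
        }
    }

    virtual_server 192.168.10.100 80 {
        delay_loop 6
        lb_algo rr                   # round-robin
        lb_kind DR                   # DR mode within the same network segment
        protocol TCP
        real_server 192.168.10.21 80 {   # an Nginx instance
            weight 1
            TCP_CHECK { connect_timeout 3 }
        }
        real_server 192.168.10.22 80 {
            weight 1
            TCP_CHECK { connect_timeout 3 }
        }
    }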

In the cloud era, the service mesh wind is rising. Rising stars built around the sidecar model, such as Linkerd, Conduit, NginMesh, and Istio, not only cover load balancing but also put a lot of thought into high availability; I will have a chance to sort them out with you another time. A long while ago I wrote an article surveying service governance frameworks; interested friends can jump over and take a look: "Essential medicine in distributed systems - service governance".

