Shulou (Shulou.com) — SLTechnology News & Howtos, 2025-03-31 update
This article introduces distributed system load balancing through practical cases. The operations involved are simple, quick, and practical; hopefully this case analysis helps you solve similar problems.
What is load balancing?
I first came into contact with Nginx in the lab, when we needed it to deploy a website on a server. Nginx is a service component used for reverse proxying, load balancing, HTTP caching, and so on. So what exactly is the load balancing here?
Load balancing (LB, Load Balance) is a technical solution used to distribute load across multiple resources (usually servers) in order to optimize resource usage and avoid overload.
Here a resource is equivalent to an execution unit of a service instance. Load balancing spreads a large volume of processing work across multiple such units, and is used to address the high-traffic, high-concurrency, and high-availability problems of Internet-scale distributed systems. So what is high availability?
What is high availability?
Before defining it, recall the CAP theorem, the foundation of distributed systems, which names three key properties of a distributed system:
Consistency
Availability
Partition tolerance
So what is high availability (High Availability)? High availability, HA for short, is a characteristic (or metric) of a system: the service stays up, at an acceptable level of performance, for a higher proportion of time than normal; conversely, the time during which the service is unavailable is minimized.
The measure of whether a system is highly available is whether, when one or more servers go down, the system as a whole and its services remain available.
For example, some well-known websites guarantee "four nines" of availability, that is, at least 99.99% uptime; the remaining 0.01% is the downtime percentage. For an e-commerce site, for instance, service unavailability directly costs the business money and users, so on top of improving availability, there is often compensation for outages.
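The "nines" translate directly into a yearly downtime budget. A quick sketch of the arithmetic (assuming a 365-day year):

```python
# Downtime budget implied by an availability target.

def yearly_downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per 365-day year."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    return minutes_per_year * (1 - availability)

# "Four nines" (99.99%) allows roughly 52.6 minutes of downtime per year.
print(round(yearly_downtime_minutes(0.9999), 1))  # 52.6
# "Three nines" (99.9%) allows about ten times as much.
print(round(yearly_downtime_minutes(0.999), 1))   # 525.6
```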
For example, for an ordering service, multiple order-service instances behind a load balancer can be used instead of a single instance; that is, redundancy is used to improve reliability.
In a word, load balancing (Load Balance) is one of the factors that must be considered when designing a distributed system architecture. Generally, the high-traffic, high-concurrency, and high-availability problems of a distributed system are addressed through load balancing plus redundant instances of the same service. The key question for load balancing is whether requests are distributed evenly.
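As a minimal illustration of "even distribution", here is a round-robin dispatcher cycling through redundant service instances (the instance names are made up for the example):

```python
from itertools import cycle

# Round robin: hand each incoming request to the next instance in turn.
instances = ["hello:8000", "hello:8082"]
next_instance = cycle(instances).__next__

# Eight requests split evenly, four per instance.
assigned = [next_instance() for _ in range(8)]
print(assigned.count("hello:8000"), assigned.count("hello:8082"))  # 4 4
```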
Common cases of load balancing
Scenario 1: in a microservice architecture, the gateway routes to a specific service instance hello:
- Two identical hello service instances, one on port 8000 and the other on port 8082
- Requests are distributed evenly across the two hello instances through Kong's load balancing (LB) feature
- Kong supports several load balancing algorithms: the default weighted-round-robin, a hash algorithm taking e.g. the consumer id as input, and others
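A simple sketch of the weighted-round-robin idea (not Kong's actual implementation): an instance with weight 2 receives twice as many requests as one with weight 1.

```python
# Weighted round robin, illustrated by expanding each instance by its
# weight and cycling through the resulting list.

def weighted_round_robin(weights, n_requests):
    ring = [name for name, w in weights.items() for _ in range(w)]
    return [ring[i % len(ring)] for i in range(n_requests)]

# hello-a (weight 2) gets twice the traffic of hello-b (weight 1).
picks = weighted_round_robin({"hello-a": 2, "hello-b": 1}, 9)
print(picks.count("hello-a"), picks.count("hello-b"))  # 6 3
```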
Scenario 2: in a microservice architecture, service A calls a cluster of service B through the Ribbon client-side load balancing component:
- The load balancing strategies are not sophisticated; the simplest are random selection and round robin
Solution of Internet distributed system
Common Internet distributed system architectures are divided into several layers, generally as follows:
Client layer: such as user browser, APP side
Reverse proxy layer: technology choices such as Nginx or F5
Web layer: with front and back ends separated, the web side may use Node.js, React Native, or Vue
Business service layer: Java or Go; in typical Internet companies the stack is Spring Cloud or Spring Boot + Dubbo services
Data storage layer: MySQL for the DB, Redis for the cache, Elasticsearch (ES) for search, etc.
A request passes through these layers one by one, and from layer 1 to layer 4 each hop needs load balancing: whenever an upstream calls multiple downstream services, the calls need to be spread evenly. That way, viewed as a whole, the system is load balanced.
Layer 1: load balancing of client layer-> reverse proxy layer
How to achieve load balancing in client layer-> reverse proxy layer?
The answer is DNS polling (round-robin DNS). DNS can configure multiple IP addresses for the same domain via A records (Address records, which map a domain name to an IP address). For example, the DNS for bysocket.com is configured with ip1 and ip2. For high availability of the reverse proxy layer, there are at least two A records; this pair of redundant IPs, each backed by an nginx instance, prevents a single point of failure.
Each time the bysocket.com domain name is resolved, DNS polling returns one of the IPs, each of which is the public IP of an nginx instance in the reverse proxy layer. In this way, requests are distributed evenly across the reverse proxy instances.
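The DNS polling behavior can be simulated as follows (ip1/ip2 are the placeholder addresses from the article, not real IPs; real resolvers rotate the order of the returned A records):

```python
# Sketch of round-robin DNS: successive lookups alternate between the
# A records, so clients spread across the nginx instances.
from collections import Counter

a_records = ["ip1", "ip2"]  # two A records for bysocket.com

def resolve(lookup_number):
    # Each lookup returns the next record in rotation.
    return a_records[lookup_number % len(a_records)]

hits = Counter(resolve(i) for i in range(10))
print(hits["ip1"], hits["ip2"])  # 5 5
```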
Layer 2: reverse proxy layer-> Web layer load balancing
How to achieve load balancing in the reverse proxy layer-> Web layer?
This is handled by the load balancing module of the reverse proxy layer. nginx, for example, supports several balancing methods:
Request polling (round robin): requests are assigned to the web-layer servers one by one in order of arrival, cycling around. If a web-layer server goes down, it is removed from rotation automatically.
upstream web-server {
    server ip3;
    server ip4;
}
IP hash: route to a web-layer server according to the hash of the client IP. As long as client IPs are evenly distributed, requests to the web layer are too.
A further benefit is that requests from the same IP are always routed to the same web-layer server; each user consistently hits one server, which sidesteps the session problem.
upstream web-server {
    ip_hash;
    server ip3;
    server ip4;
}
Other methods: weight, fair, url_hash, etc.
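The ip_hash idea above can be sketched in a few lines (a simplified illustration; nginx's real algorithm hashes only part of an IPv4 address, and ip3/ip4 are the placeholder servers from the config):

```python
# ip_hash sketch: hash the client IP so the same client always lands on
# the same web-layer server (session stickiness).
import hashlib

servers = ["ip3", "ip4"]

def pick_server(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client IP is always routed to the same server.
assert pick_server("10.0.0.7") == pick_server("10.0.0.7")
```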
Layer 3: Web layer-> load balancing of business service layer
How to realize the load balancing in the Web layer-> business service layer?
Dubbo, for example, is a service governance solution covering service registration, service degradation, access control, dynamic routing rules, weight adjustment, and load balancing. One of its features is intelligent load balancing: several built-in strategies that sense the health of downstream nodes, significantly reducing call latency and improving system throughput.
To avoid single points of failure and to support horizontal scaling, a service is usually deployed as multiple instances, i.e. as a Dubbo cluster. The instances register as Providers; suppose there are 20 of them, then under the configured random load balancing policy one of the 20 Providers is chosen for each call, say the seventh. The LoadBalance component applies the balancing strategy to the Provider address list to select one Provider; if the call fails, another is selected.
Four load balancing strategies are built into Dubbo:
RandomLoadBalance: random load balancing; pick one at random. This is Dubbo's default policy.
RoundRobinLoadBalance: round-robin load balancing; pick Providers in turn.
LeastActiveLoadBalance: least active calls, with random selection among Providers tied on active count. The active count is the number of in-flight calls (the difference between calls started and calls completed), so slower Providers accumulate a higher count and therefore receive fewer new requests.
ConsistentHashLoadBalance: consistent-hash load balancing; requests with the same parameters always land on the same machine.
Similarly, when the business requires it, you can also implement your own load balancing strategy.
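The consistent-hash strategy is the least obvious of the four, so here is a minimal ring sketch (an illustration only, not Dubbo's code; provider names and the replica count are made up). Requests with the same key always map to the same Provider, and removing a Provider only remaps the keys that were on it:

```python
# A minimal consistent-hash ring with virtual nodes.
import bisect
import hashlib

def h(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, providers, replicas=100):
        # Virtual nodes smooth out the key distribution across providers.
        self.ring = sorted(
            (h(f"{p}#{i}"), p) for p in providers for i in range(replicas)
        )
        self.keys = [k for k, _ in self.ring]

    def pick(self, request_key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self.keys, h(request_key)) % len(self.keys)
        return self.ring[idx][1]

ring = ConsistentHashRing(["provider-1", "provider-2", "provider-3"])
# The same request key maps to the same provider, call after call.
assert ring.pick("order:42") == ring.pick("order:42")
```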
Layer 4: business service layer-> load balancing in the data storage layer
Load balancing in the data storage layer is generally implemented through a DB proxy, for example MySQL sharding (splitting databases and tables).
When access to a single database or table is too heavy and the data volume too large, it needs to be split along two dimensions, vertical and horizontal. Example horizontal sharding rules:
Range, e.g. by time
Hash modulo, e.g. sharding orders by store ID
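The hash-modulo rule can be sketched as follows (the table names and shard count are hypothetical):

```python
# Horizontal sharding sketch: route an order row to a table by store ID
# using hash modulo.

N_SHARDS = 4

def order_table(store_id):
    # All orders for one store land in the same shard table.
    return f"t_order_{store_id % N_SHARDS}"

print(order_table(7))   # t_order_3
print(order_table(12))  # t_order_0
```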
However, with sharding in place, the following problems arise and need to be solved:
Distributed transaction
Cross-database joins, etc.
There are many product solutions for sharding today: Dangdang's Sharding-JDBC, Alibaba's Cobar, etc.
This concludes the case study of distributed system load balancing. Thank you for reading.