This article explains the infrastructure and usage scenarios of load balancing in cloud computing. I hope it leaves you with a solid understanding of the topic after reading.
Infrastructure
The load balancer is deployed as a cluster with session synchronization, which eliminates single points of failure on the server side, improves redundancy, and keeps the service stable. Aliyun currently provides load balancing at layer 4 (TCP and UDP) and layer 7 (HTTP and HTTPS).
Layer 4 load balancing is implemented with the open source software LVS (Linux Virtual Server) plus keepalived, customized for the needs of cloud computing.
Layer 7 load balancing is implemented with Tengine, a web server project initiated by Taobao that builds on Nginx and adds many advanced features for high-traffic websites.
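To make the layer split concrete, here is a minimal Python sketch (not Aliyun or LVS/Tengine code) of the difference: a layer-4 scheduler picks a backend purely from the connection tuple, while a layer-7 scheduler can inspect HTTP content such as the Host header. The backend addresses and routing rules below are invented for illustration.

```python
# Illustrative only: layer-4 selection uses the connection tuple, while
# layer-7 selection parses HTTP-level information (here, the Host header).
import hashlib

L4_BACKENDS = ["10.0.0.11:80", "10.0.0.12:80"]
L7_RULES = {"static.example.com": "10.0.1.21:80",
            "api.example.com": "10.0.1.22:80"}

def pick_layer4(client_ip: str, client_port: int) -> str:
    """Layer 4: choose a backend from the client tuple, no payload parsing."""
    key = f"{client_ip}:{client_port}".encode()
    idx = int(hashlib.md5(key).hexdigest(), 16) % len(L4_BACKENDS)
    return L4_BACKENDS[idx]

def pick_layer7(host_header: str) -> str:
    """Layer 7: route by content rules, falling back to a default backend."""
    return L7_RULES.get(host_header, L4_BACKENDS[0])

if __name__ == "__main__":
    print(pick_layer4("203.0.113.7", 51324))
    print(pick_layer7("api.example.com"))
```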
As shown in the following figure, layer-4 load balancing in each region actually runs on multiple LVS machines deployed as an LVS cluster. This cluster deployment model keeps the load balancing service available, stable, and scalable even under abnormal conditions.
Each LVS machine in the cluster synchronizes its sessions to the other machines via multicast messages, achieving session synchronization across the cluster. As shown in the following figure, after the client has sent three packets to the server, session A established on LVS1 starts to synchronize to the other LVS machines. The solid line represents the existing connection; the dotted line indicates that if LVS1 fails or is taken down for maintenance, this traffic moves to a working machine, LVS2. The load balancing cluster therefore supports hot upgrades, and machine failures and cluster maintenance are transparent to users and do not affect their business.
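The session-synchronization idea can be sketched in Python as follows; the direct peer copy stands in for the multicast sync messages that LVS actually uses, and all class and method names are illustrative rather than Aliyun or LVS internals.

```python
# Minimal sketch of session synchronization: each node records a session
# locally and copies it to its peers, so a peer can keep serving the
# connection if the original node fails or is taken down for maintenance.
class LVSNode:
    def __init__(self, name):
        self.name = name
        self.sessions = {}   # (client, vip) -> backend server
        self.peers = []

    def join_cluster(self, nodes):
        self.peers = [n for n in nodes if n is not self]

    def establish(self, client, vip, backend):
        self.sessions[(client, vip)] = backend
        # In LVS this replication happens via multicast sync messages;
        # here we simply copy the entry to every peer.
        for peer in self.peers:
            peer.sessions[(client, vip)] = backend

    def forward(self, client, vip):
        return self.sessions.get((client, vip))

lvs1, lvs2 = LVSNode("LVS1"), LVSNode("LVS2")
for node in (lvs1, lvs2):
    node.join_cluster([lvs1, lvs2])

lvs1.establish("203.0.113.7", "198.51.100.10", "10.0.0.11")
# If LVS1 goes down for maintenance, LVS2 already knows the session:
print(lvs2.forward("203.0.113.7", "198.51.100.10"))  # -> 10.0.0.11
```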
Note: if the connection has not been established (the three-way handshake has not completed), or the connection is established but session synchronization has not yet been triggered, a hot upgrade cannot guarantee that the connection survives; the client must re-initiate the connection.
Usage scenarios
The load balancer is mainly used in the following scenarios:
Scenario 1: businesses with high traffic
If your application handles high traffic, you can configure listener rules to distribute that traffic across different ECS instances. You can also enable session persistence so that requests from the same client are forwarded to the same backend ECS instance, improving access efficiency.
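A hedged sketch of what session persistence amounts to: hashing a client identifier so the same client always lands on the same backend ECS. Using the client IP as the key is an assumption for this example; real listeners can also persist on cookies.

```python
# Illustrative session persistence: the same client IP always maps to the
# same backend. Backend names are invented for the example.
import hashlib

BACKENDS = ["ecs-1", "ecs-2", "ecs-3"]

def sticky_backend(client_ip: str) -> str:
    digest = int(hashlib.sha1(client_ip.encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

# Repeated requests from one client keep hitting the same backend.
assert sticky_backend("203.0.113.7") == sticky_backend("203.0.113.7")
print(sticky_backend("203.0.113.7"))
```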
Scenario 2: horizontally expanding the system
As your business grows, you can expand or shrink the application system's service capacity by adding or removing ECS instances at any time. This applies to a variety of web servers and application servers.
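The horizontal scaling described here is simply a matter of changing the backend pool, as the short sketch below shows; the instance IDs and helper functions are illustrative, not a cloud API.

```python
# Sketch of horizontal scaling: capacity changes by adding or removing
# backend ECS instances from the pool behind the load balancer.
backend_pool = ["ecs-web-1", "ecs-web-2"]

def scale_out(pool, new_instances):
    """Attach extra ECS instances when traffic grows."""
    return pool + list(new_instances)

def scale_in(pool, retired_instances):
    """Detach ECS instances when the capacity is no longer needed."""
    retired = set(retired_instances)
    return [b for b in pool if b not in retired]

backend_pool = scale_out(backend_pool, ["ecs-web-3"])
backend_pool = scale_in(backend_pool, ["ecs-web-1"])
print(backend_pool)   # ['ecs-web-2', 'ecs-web-3']
```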
Scenario 3: eliminating single points of failure
You can attach multiple ECS instances to a load balancer instance. When some of the ECS instances fail, the load balancer automatically shields the failed instances and distributes requests to the healthy ones, so the application system keeps working normally.
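The shielding behaviour can be illustrated with a simple health-check sketch; the TCP connect probe, addresses, and round-robin choice below are assumptions made for the example, not the load balancer's actual implementation.

```python
# Sketch of failure shielding: a health check marks unhealthy backends and
# traffic is only spread across the ones that pass the probe.
import socket
from itertools import cycle

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]

def is_healthy(host, port, timeout=2.0):
    """Probe the backend with a plain TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_pool():
    """Keep only the backends that currently answer the probe."""
    return [b for b in BACKENDS if is_healthy(*b)]

# Round-robin over whatever is currently healthy.
pool = healthy_pool()
if pool:
    rr = cycle(pool)
    print("next backend:", next(rr))
else:
    print("no healthy backends")
```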
Scenario 4: same-city disaster recovery (multi-availability-zone disaster tolerance)
To provide a more stable and reliable service, Ali Cloud load balancer is deployed across multiple availability zones within a region, achieving same-city disaster recovery. When the primary availability zone fails or becomes unavailable, the load balancer can switch to a standby availability zone and restore service within a very short time (roughly a 30-second interruption). When the primary availability zone recovers, the load balancer automatically switches back to it.
When using the cloud load balancer, deploy the instance in a region that supports multiple availability zones to achieve same-city disaster recovery. You should also plan the backend server deployment around your application's needs: adding at least one ECS instance to each availability zone makes the most effective use of the load balancing service.
As shown in the following figure, bind ECS instances from different availability zones to the cloud load balancer instance. Under normal conditions, user traffic is forwarded to the ECS instances in the primary availability zone; when availability zone A fails, traffic is forwarded to the ECS instances in the standby availability zone. This deployment not only prevents a single availability zone failure from taking the service offline, it also reduces latency through sensible availability zone choices across different products.
If, instead, you adopt the deployment shown in the following figure, where multiple ECS instances are bound only in the primary availability zone and none in the standby availability zone, a failure of the primary availability zone interrupts the business, because there are no ECS instances in the standby zone to receive requests. This deployment clearly sacrifices high availability for low latency.
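A small sketch of why the zone-balanced deployment is preferred: failover only helps if the standby zone actually has backends of its own. The zone names and instance IDs below are made up for illustration.

```python
# Sketch of multi-availability-zone failover: traffic goes to the primary
# zone's backends when it is up, otherwise to backends in the other zones.
def serving_backends(backends_by_zone, primary_zone, primary_up=True):
    if primary_up and backends_by_zone.get(primary_zone):
        return backends_by_zone[primary_zone]
    # Primary zone failed (or is empty): fall back to the other zones.
    fallback = []
    for zone, instances in backends_by_zone.items():
        if zone != primary_zone:
            fallback.extend(instances)
    return fallback

balanced = {"zone-a": ["ecs-a1"], "zone-b": ["ecs-b1"]}
lopsided = {"zone-a": ["ecs-a1", "ecs-a2"], "zone-b": []}

print(serving_backends(balanced, "zone-a", primary_up=False))   # ['ecs-b1']
print(serving_backends(lopsided, "zone-a", primary_up=False))   # [] -> outage
```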
Scenario 5: cross-regional disaster recovery
You can deploy cloud load balancer instances in different regions and attach ECS instances from different availability zones within each region. On top of this, cloud DNS resolution acts as an intelligent DNS, resolving domain names to the service addresses of the load balancer instances in the different regions to achieve global load balancing. When a region becomes unavailable, the corresponding DNS resolution is paused so that user access is not affected.
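The DNS-based global load balancing described above can be sketched roughly as follows; the region table, addresses, and the resolve function are illustrative assumptions, not the cloud DNS product's API.

```python
# Sketch of DNS-based global load balancing: the intelligent DNS answers
# with the load balancer address of a healthy region and pauses resolution
# for a region marked unavailable.
from typing import Optional

REGION_LB = {
    "cn-hangzhou": {"address": "198.51.100.10", "available": True},
    "cn-beijing":  {"address": "198.51.100.20", "available": True},
}

def resolve(domain: str, preferred_region: str) -> Optional[str]:
    record = REGION_LB.get(preferred_region)
    if record and record["available"]:
        return record["address"]
    # The preferred region is paused: answer with any region still in service.
    for info in REGION_LB.values():
        if info["available"]:
            return info["address"]
    return None

REGION_LB["cn-hangzhou"]["available"] = False   # simulate a regional outage
print(resolve("www.example.com", "cn-hangzhou"))  # -> 198.51.100.20
```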
That covers the infrastructure and usage scenarios of load balancing in cloud computing. I hope the content above is helpful; if you found the article useful, feel free to share it with others.