Example Analysis of Load Balancing on Kubernetes


This article introduces load balancing on Kubernetes through example analysis, and should serve as a useful reference for interested readers. I hope you learn a lot from it. Let's take a look.

If your application serves a large number of users and attracts heavy traffic, a constant goal is to meet user demand efficiently, without users ever hitting anything like a "the server is busy!" message. The typical solution to this demand is scale-out deployment, so that multiple application containers can serve user requests. That technique, however, requires reliable routing and the ability to distribute traffic effectively among multiple servers. This article shares a solution to that load balancing problem.

Rancher 1.6 is a container orchestration platform for Docker and Kubernetes that provides feature-rich support for load balancing. In Rancher 1.6, users get HTTP/HTTPS/TCP routing, with hostname- and path-based rules, from an out-of-the-box HAProxy load balancer.

In this article, we will explore how to implement these popular load balancing techniques on Rancher 2.0, a platform natively orchestrated with Kubernetes.

Rancher 2.0 load balancing features

With Rancher 2.0, users get layer-7 load balancing out of the box through the native Kubernetes Ingress feature, backed by the NGINX Ingress Controller. Because Kubernetes Ingress supports only the HTTP and HTTPS protocols, load balancing through Ingress is currently limited to those two protocols.

For the TCP protocol, Rancher 2.0 supports configuring a layer-4 TCP load balancer in the cloud where the Kubernetes cluster is deployed. We will also show how to configure the NGINX Ingress Controller for TCP load balancing through ConfigMaps.

HTTP/HTTPS load balancing

In Rancher 1.6, you added port/service rules to configure the HAProxy load balancer in front of target services, and you could also configure hostname- and path-based routing rules.

For example, let's look at a service on Rancher 1.6 that starts two containers, each listening on private port 80.

To balance external traffic between the two containers, we can create a load balancer for the application, as shown below. Here, we configure the load balancer to forward all traffic entering port 80 to the container port of the target service; Rancher 1.6 then places a convenient link to the public endpoint on the load balancer service.
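As a rough sketch of what that 1.6 setup looks like in compose form (the service names here are hypothetical), the load balancer itself is declared in docker-compose.yml and its rules in rancher-compose.yml:

    # docker-compose.yml (Rancher 1.6) -- the HAProxy load balancer service
    version: '2'
    services:
      web-lb:
        image: rancher/lb-service-haproxy
        ports:
        - 80:80

    # rancher-compose.yml -- forward source port 80 to the target service's port 80
    version: '2'
    services:
      web-lb:
        scale: 1
        lb_config:
          port_rules:
          - source_port: 80
            target_port: 80
            service: web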

Rancher 2.0 provides a very similar load balancing feature, backed by the NGINX Ingress Controller and native Kubernetes Ingress. Let's take a look at how it works.

Rancher 2.0 Ingress Controller deployment

An Ingress is just a set of rules that a controller component applies to an actual load balancer. That actual load balancer can run outside the cluster or be deployed inside it.

With RKE (the Rancher Kubernetes Engine installer), Rancher 2.0 lets users deploy the NGINX Ingress Controller and load balancer out of the box on provisioned clusters to handle Kubernetes Ingress rules. Note that the NGINX Ingress Controller is installed by default on clusters provisioned with RKE. Clusters provisioned by cloud providers such as GKE come with their own Ingress Controllers for configuring load balancers; this article covers only the NGINX Ingress Controller installed by RKE.

RKE deploys the NGINX Ingress Controller as a Kubernetes DaemonSet, so an NGINX instance runs on every node in the cluster. NGINX acts as the Ingress Controller: it listens for Ingress creation across the entire cluster and configures itself as a load balancer that satisfies the Ingress rules. The DaemonSet is configured with hostNetwork and exposes two ports: 80 and 443. For more information on how the NGINX Ingress Controller DaemonSet is deployed and its configuration options, see:

https://rancher.com/docs/rke/v0.1.x/en/config-options/add-ons/ingress-controllers/
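This add-on is controlled from the RKE cluster.yml. A minimal sketch (the node_selector label is a hypothetical example):

    # cluster.yml (RKE) -- ingress add-on section
    ingress:
      provider: nginx
      # Optional: run the controller DaemonSet only on nodes with this label
      node_selector:
        app: ingress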

If you are a Rancher 1.6 user, the fact that the Rancher 2.0 Ingress Controller is deployed as a DaemonSet brings some important changes you should know about.

In Rancher 1.6, you could deploy a scalable load balancer service within your stack. If you had four hosts in a Cattle environment, you could deploy a load balancer service of scale 2 and point to your application via port 80 on two of the host IP addresses. You could then start another load balancer on the remaining two hosts to balance a different service, again over port 80 (since that load balancer used different host IP addresses).

The Rancher 2.0 Ingress Controller is a DaemonSet, so it is deployed globally on all schedulable nodes to serve the entire Kubernetes cluster. Consequently, when writing Ingress rules, you need to use unique hostname and path combinations to point to your workloads, because the load balancer node IP addresses and ports 80/443 are common access points for all workloads.

Now let's see how to deploy the 1.6 example above on Rancher 2.0 using Ingress. In the Rancher UI, we can navigate to the Kubernetes cluster and project and select the Deploy Workloads function to deploy a workload with the desired image under a namespace. Let's set the scale of the workload to two replicas, as follows:
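Under the hood, this corresponds to a Kubernetes Deployment with two replicas. A minimal sketch, with hypothetical names and a stand-in image:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
      namespace: default
    spec:
      replicas: 2                  # the two copies from the 1.6 example
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: nginx           # stand-in image; substitute your own
            ports:
            - containerPort: 80    # private port the containers listen on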

Here is how the workload is deployed and listed on the Workloads tab:

To balance traffic between the two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, then select the Load Balancing tab.

Similar to the service/port rules in Rancher 1.6, here you can specify rules targeting the container port of the workload.
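The rule created this way is an ordinary Kubernetes Ingress. A minimal sketch targeting the workload's port 80 (hostname and service name are hypothetical):

    apiVersion: extensions/v1beta1   # the Ingress API version of the Rancher 2.0 era
    kind: Ingress
    metadata:
      name: myapp-ingress
      namespace: default
    spec:
      rules:
      - host: myapp.example.com
        http:
          paths:
          - backend:
              serviceName: myapp     # the service fronting the two pods
              servicePort: 80        # the workload's container port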

Host- and path-based routing

Rancher 2.0 allows you to add Ingress rules based on hostnames or URL paths. Based on your rules, the NGINX Ingress Controller routes traffic to multiple target workloads. Let's look at how to route traffic to multiple services in a namespace using the same Ingress spec. For example, take the following two workloads deployed in a namespace:

We can add an Ingress that uses the same hostname but different paths to balance traffic between the two workloads.
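A sketch of such an Ingress, assuming two hypothetical workloads named workload-one and workload-two that both listen on port 80:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: path-routing
      namespace: default
    spec:
      rules:
      - host: myapp.example.com        # one shared hostname
        http:
          paths:
          - path: /foo                 # /foo goes to the first workload
            backend:
              serviceName: workload-one
              servicePort: 80
          - path: /bar                 # /bar goes to the second workload
            backend:
              serviceName: workload-two
              servicePort: 80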

Rancher 2.0 also provides convenient links to the workloads in the Ingress record. If you configure external DNS to program DNS records, you can map this hostname to the Kubernetes Ingress address.

The Ingress address is the IP address in your cluster that the Ingress Controller assigns to your workload; you can reach the workload by browsing to this IP address. Use kubectl to view the Ingress address assigned by the controller.
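For example, assuming the Ingress above lives in the default namespace:

    # The ADDRESS column shows the IP the controller assigned to the Ingress
    kubectl get ingress -n default

    # Full detail, including rules and backends
    kubectl describe ingress path-routing -n default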

You can use curl to test whether the hostname/path-based routing rules work properly, as follows:
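For instance (hypothetical hostname and paths; <ingress-address> is a placeholder for the address found in the previous step):

    # Pass the Host header so the request matches the Ingress rule
    # without needing real DNS
    curl -H "Host: myapp.example.com" http://<ingress-address>/foo
    curl -H "Host: myapp.example.com" http://<ingress-address>/bar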

For comparison, here is the Rancher 1.6 configuration specification using hostname/path-based rules alongside the 2.0 Kubernetes Ingress YAML specification:
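The 2.0 side of the comparison is the path-routing Ingress sketched earlier. The 1.6 side might look like this in rancher-compose.yml (hypothetical service names, mirroring that example):

    # rancher-compose.yml (Rancher 1.6) -- hostname/path rules on the HAProxy LB
    version: '2'
    services:
      web-lb:
        scale: 1
        lb_config:
          port_rules:
          - source_port: 80
            hostname: myapp.example.com
            path: /foo
            target_port: 80
            service: workload-one
          - source_port: 80
            hostname: myapp.example.com
            path: /bar
            target_port: 80
            service: workload-two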

HTTPS / Certificate option

The Rancher 2.0 Ingress feature also supports the HTTPS protocol. You can upload certificates and use them when configuring Ingress rules, as follows:

Select a certificate when adding an Ingress rule:
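In Ingress YAML terms, the selected certificate becomes a tls section referencing a Kubernetes secret. A sketch with hypothetical names:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myapp-https
      namespace: default
    spec:
      tls:
      - hosts:
        - myapp.example.com
        secretName: myapp-cert       # the uploaded certificate, stored as a secret
      rules:
      - host: myapp.example.com
        http:
          paths:
          - backend:
              serviceName: myapp
              servicePort: 80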

Ingress limitations

Although Rancher 2.0 supports HTTP/HTTPS hostname- and path-based load balancing, an important difference to highlight is that a unique hostname/path combination is needed when configuring Ingress for workloads. The reason is that Ingress only allows ports 80 and 443 to be used for routing, and the load balancer and Ingress Controller are started globally as a DaemonSet.

As of the latest Rancher 2.x release, Kubernetes Ingress does not support the TCP protocol, but we will discuss a workaround using the NGINX Ingress Controller in the next section.

TCP load balancing options

Layer-4 load balancer

For the TCP protocol, Rancher 2.0 supports configuring a layer-4 load balancer in the cloud provider where the Kubernetes cluster is deployed. Once this option is available for the cluster, selecting the Layer-4 Load Balancer option for port mapping during workload deployment makes Rancher create a Load Balancer service. This service prompts the appropriate Kubernetes cloud provider integration to provision a load balancer device, which then routes external traffic to your application pods. Note that this requires a Kubernetes cloud provider that satisfies the load balancer service requirements, configured as described in this document:

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/

Once the load balancer is successfully configured, Rancher provides a link to the public endpoint of your workload in the Rancher UI.
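The Load Balancer service that Rancher creates is the standard Kubernetes type. A minimal sketch, with hypothetical names and port:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-lb
      namespace: default
    spec:
      type: LoadBalancer       # asks the cloud provider for an external LB device
      selector:
        app: myapp             # matches the workload's pod labels
      ports:
      - protocol: TCP
        port: 6379             # external port on the cloud load balancer
        targetPort: 6379       # port the pods listen on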

TCP support for the NGINX Ingress Controller through ConfigMaps

As mentioned above, Kubernetes Ingress itself does not support the TCP protocol. So even though TCP is not a limitation of NGINX itself, it is not possible to configure the NGINX Ingress Controller for TCP load balancing by creating an Ingress.

However, you can use NGINX's TCP load balancing capability by creating a Kubernetes ConfigMap, as described here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md. A Kubernetes ConfigMap object stores pod configuration parameters as key-value pairs, separate from the pod image. For more details, see:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

To configure NGINX to expose your services over TCP, add or update the ConfigMap named tcp-services in the ingress-nginx namespace, the namespace that also contains the NGINX Ingress Controller pods.
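A sketch of such a ConfigMap follows. The first entry matches the myapp example described below; the second entry (mydb) is purely hypothetical:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      "6790": "default/myapp:80"     # expose default/myapp port 80 on TCP 6790
      "6791": "default/mydb:3306"    # hypothetical second workload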

The key of each ConfigMap entry should be the TCP port you want to expose publicly, and its value should be in the format <namespace/service name>:<service port> (see the ingress-nginx documentation linked above). As shown above, I exposed two workloads that exist in the default namespace. For example, the first entry in the ConfigMap tells NGINX that I want to expose the myapp workload, which runs in the default namespace and listens on private port 80, on external port 6790.

Adding these entries to the ConfigMap automatically updates the NGINX pods to configure TCP load balancing for these workloads. You can exec into the pods deployed in the ingress-nginx namespace and see how these TCP ports are configured in the /etc/nginx/nginx.conf file. Once the updated /etc/nginx/nginx.conf is in place, the exposed workloads should be reachable; if they are not accessible, you may have to expose the TCP ports using a NodePort service.
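For example (the pod name is a placeholder; list the pods first to find it):

    kubectl get pods -n ingress-nginx
    # Check that the TCP port was written into the generated NGINX config
    kubectl exec -n ingress-nginx <nginx-ingress-controller-pod> -- \
      grep 6790 /etc/nginx/nginx.conf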

Limitations of Rancher 2.0 load balancing

Cattle provided feature-rich load balancer support (detailed here: https://rancher.com/docs/rancher/v1.6/en/cattle/adding-load-balancers/#load-balancers). Some of those features do not yet have an equivalent in Rancher 2.0:

SNI is not currently supported in NGINX Ingress Controller.

TCP load balancing requires a load balancer device enabled by the cluster's cloud provider; Kubernetes Ingress itself has no TCP support.

HTTP/HTTPS routes can only be configured through Ingress on ports 80 and 443. In addition, the Ingress Controller is deployed globally as a DaemonSet rather than started as a scalable service, and users cannot assign arbitrary external ports for load balancing. Users must therefore ensure that the hostname/path combinations they configure are unique, to avoid routing conflicts on those two shared ports.

Port rule priority and ordering cannot be specified.

Rancher 1.6 added support for draining backend connections and drain timeouts. This feature is not currently supported in Rancher 2.0.

Currently, Rancher 2.0 does not support specifying a custom stickiness policy or attaching a custom load balancer configuration to the default one. Native Kubernetes has some support for this, though it is limited to custom NGINX configuration:

https://kubernetes.github.io/ingress-nginx/examples/customization/custom-configuration/README/
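A sketch of that approach: set options in the controller's configuration ConfigMap. The ConfigMap name must match the controller's --configmap argument (nginx-configuration is typical for RKE installs), and the keys shown are real ingress-nginx options:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration
      namespace: ingress-nginx
    data:
      proxy-connect-timeout: "10"    # seconds NGINX waits when connecting upstream
      proxy-read-timeout: "120"      # seconds NGINX waits for an upstream response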

Can load balancer configuration be migrated from Docker Compose to Kubernetes YAML?

Rancher 1.6 provided load balancer support by starting its own microservice, which launched and configured HAProxy. The load balancer configuration added by users is specified in the rancher-compose.yml file rather than the standard docker-compose.yml. The Kompose tool works on standard docker-compose parameters, but it cannot parse Rancher's load balancer configuration constructs, so as of now it is not possible to use Kompose to convert a load balancer configuration from Docker Compose to Kubernetes YAML.

Thank you for reading this article carefully. I hope this "Example Analysis of Load Balancing on Kubernetes" has been helpful to you.
