2025-01-21 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article explains how to achieve load balancing in Kubernetes. The approach described is simple, fast, and practical, so let's walk through it.
Managing containers
To understand load balancing in Kubernetes, you first need to understand how Kubernetes organizes containers. Containers typically perform a specific service or set of services, so it makes sense to treat them according to the services they provide rather than as individual instances (that is, single containers). In essence, that is exactly what Kubernetes does.
Putting them in pods
In Kubernetes, the pod is the basic functional unit. A pod is a set of containers together with the volumes they share. The containers in a pod are usually closely related in function and service. A collection of pods with the same feature set is abstracted into a service. Clients of a Kubernetes-based application access these services rather than individual pods; the services in turn manage access to the pods and the containers that make them up, isolating the client from the containers themselves.
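The idea of a pod as "a set of containers plus the volumes they share" can be sketched as a minimal manifest. The names (`web-pod`, the `app: web` label) and images are illustrative placeholders, not anything prescribed by Kubernetes:

```yaml
# Hypothetical two-container pod: both containers share an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web          # label used later to group pods into a service
spec:
  volumes:
    - name: shared-data
      emptyDir: {}    # scratch volume that lives as long as the pod
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers mount the same volume, so the sidecar's output becomes the web server's content; this tight coupling is exactly why they belong in one pod.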
Managing pods
Now let's look at some details. Pods are typically created and destroyed by Kubernetes rather than designed as persistent entities. Each pod has its own cluster-local IP address, UID, and port numbers; a newly created pod, whether it is a replica of a current pod or a replacement for a previous one, is assigned a new UID and IP address. Containers within a pod can communicate with each other, but they cannot communicate directly with containers in a different pod.
Letting Kubernetes handle it
Kubernetes uses its own built-in tools to manage communication with individual pods. This means that, in general, it is enough to rely on Kubernetes's internal handling of pods, and you do not have to worry about their creation, deletion, or replication. Sometimes, however, at least some internal elements of an application managed by Kubernetes must be visible to the underlying network. When that happens, the design must account for the lack of permanent pod IP addresses.
Pods and nodes
In many ways, Kubernetes can be thought of as a pod-management system as much as a container-management system. Most of the infrastructure handles workloads at the pod level rather than the container level. From the perspective of Kubernetes's internal management, the organizational level above the pod is the node: a (typically virtual) machine that contains management and communication resources and is the environment into which pods are deployed. Nodes themselves can also be created, destroyed, and replaced or redeployed. At both the node level and the pod level, creation, destruction, redeployment, use, and scaling are handled by internal processes called controllers.
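The controller idea above can be illustrated with a Deployment, one of the standard Kubernetes controllers: it continuously creates, destroys, and replaces pods to keep a desired replica count. The names and image here are placeholders:

```yaml
# Hypothetical Deployment: a controller that keeps three replicas of a pod running,
# replacing any replica that dies with a new pod (new UID, new IP).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:              # pod template the controller stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Deleting one of the three pods does not reduce capacity for long: the controller notices the shortfall and schedules a replacement, which is why individual pod IPs cannot be relied upon.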
Services act as dispatchers
The service is how Kubernetes presents pods to the rest of the world at the administrative level. As mentioned above, Kubernetes abstracts functionally related or identical pods into services, and it is at the service level that external clients and other elements of an application interact with pods. A service has a relatively stable IP address (used internally by Kubernetes). When a program needs functionality from a service, it makes a request to the service rather than to an individual pod. The service then acts as a dispatcher, assigning a pod to handle the request.
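A minimal Service manifest shows this dispatcher role: the stable virtual IP fronts every pod matching the selector, and requests to it are forwarded to one of those pods. The service name and label are illustrative:

```yaml
# Hypothetical Service: a stable virtual IP in front of all pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web           # any pod carrying this label becomes a backend
  ports:
    - protocol: TCP
      port: 80         # port the service exposes
      targetPort: 80   # port on the backend pods
```

Clients resolve and call `web-service`; which pod actually answers is decided at request time, so pods can come and go behind the service without clients noticing.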
Dispatching and load distribution
At this point you may wonder: does load balancing happen at this dispatching level? It does. A Kubernetes service is a bit like a huge pool of equipment, sending machines with the same function into a designated area as needed. As part of dispatching, it must manage availability and avoid resource bottlenecks.
Letting kube-proxy perform load balancing
The most basic type of load balancing in Kubernetes is actually load distribution, which is easy to implement at the dispatching level. Kubernetes uses two methods of load distribution, both performed through kube-proxy, the component responsible for managing the virtual IPs used by services. The default kube-proxy mode is iptables, which supports fairly sophisticated rule-based IP management. In iptables mode, the native method of load distribution is random selection: an incoming request is routed to a randomly chosen pod within a service. The older (and originally default) kube-proxy mode is userspace, which uses round-robin load distribution: it assigns the next available pod on the IP list, then rotates the list.
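The proxy mode is selected in kube-proxy's configuration. A fragment of a `KubeProxyConfiguration` choosing iptables mode might look like this (how the configuration is delivered to kube-proxy varies by cluster setup, so treat this as a sketch):

```yaml
# Fragment of a kube-proxy configuration selecting the iptables proxy mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"   # alternatives historically include "userspace"; "ipvs" on newer clusters
```

In many clusters this lives in a ConfigMap in the `kube-system` namespace that the kube-proxy DaemonSet mounts; the exact delivery mechanism depends on how the cluster was bootstrapped.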
Real load balancing: Ingress
The two methods mentioned above are not really load balancing, however. To achieve true load balancing, the most popular and flexible method is Ingress, which operates through a controller running in a dedicated Kubernetes pod. The controller consists of an Ingress resource, a set of rules governing traffic, and a daemon that applies those rules. The controller has built-in load-balancing features of its own, some quite sophisticated. You can also write Ingress resources that encode more complex load-balancing rules to meet the requirements of specific systems or vendors.
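An Ingress resource is just such a rule set: it maps external hosts and paths to backend services. The host, resource name, and backend service name below are placeholders, and an Ingress controller (such as one based on NGINX or HAProxy) must be installed for the rules to take effect:

```yaml
# Hypothetical Ingress: route HTTP traffic for example.com to a backend service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com        # external hostname this rule matches
      http:
        paths:
          - path: /
            pathType: Prefix   # match every path under /
            backend:
              service:
                name: web-service   # placeholder backend service name
                port:
                  number: 80
```

The controller daemon watches resources like this one and reconfigures its own proxy accordingly, which is where the actual load-balancing decisions are made.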
Using a LoadBalancer service instead
As an alternative to Ingress, you can use a service of type LoadBalancer. Such a service uses an external, cloud-based load balancer. The LoadBalancer type can only be used with specific cloud providers, such as AWS, Azure, OpenStack, CloudStack, and Google Compute Engine, and the balancer's capabilities depend on the provider. Other load-balancing methods are also available from service providers and third parties.
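Requesting a cloud load balancer is a one-line change to the Service manifest: setting `type: LoadBalancer` asks the cloud provider to provision an external balancer and point it at the service. Names and labels are again placeholders:

```yaml
# Hypothetical LoadBalancer service: the cloud provider provisions an external
# load balancer and assigns it a public IP or hostname.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

On an unsupported environment the service simply stays in a pending state with no external IP, which is why this type is tied to specific providers.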
Generally speaking, Ingress is recommended.
Currently, Ingress is the preferred load-balancing method. Because it runs inside Kubernetes as a pod-based controller, it has relatively unrestricted access to Kubernetes functionality (unlike external load balancers, some of which may have no access at the pod level). The configurable rules contained in Ingress resources support very detailed, highly refined load balancing that can be customized to an application's functional requirements and operating conditions.
At this point, you should have a deeper understanding of how to achieve load balancing in Kubernetes. Try it out in practice.