2025-01-17 Update From: SLTechnology News&Howtos
The following is an introduction to Kubernetes service discovery and load balancing for readers with zero background, which we hope will help you in practical application. Load balancing covers a lot of ground; there is not much theory here, and plenty of material exists online, so today we will draw on accumulated industry experience to walk through it.
I. Sources of Demand
Why Service Discovery is needed
In a Kubernetes (K8s) cluster, applications are deployed as pods. This differs from traditional deployment, where an application runs on a given machine and we know the IP addresses of the other machines it needs to call. In a K8s cluster, pods are short-lived: over a pod's life cycle, as it is created and destroyed, its IP address changes. The traditional approach of accessing an application by a fixed, specified IP therefore no longer works.
In addition, although the Deployment model you have already learned lets you create a group of pods, that group still needs a unified access entry, as well as a way to control traffic and load balance it across the group. For example, the test, staging, and production environments should share the same deployment template and access method, so that the same set of application templates can be published directly into different environments.
Service: Service Discovery and Load Balancing in Kubernetes
Finally, an application service needs to be exposed for external users to invoke. As we learned in the previous section, the pod network and the machine network are not the same network segment, so how do we expose the pod network to external access? This is where service discovery comes in.
In K8s, service discovery and load balancing are provided by the K8s Service. The figure above shows the architecture of Service in K8s. Upward, a K8s Service provides access from both the external network and the pod network: external clients can reach applications through the Service, and pods can reach each other through it as well.
Downward, a K8s Service connects to a group of pods: it load balances across them, which solves the problems mentioned earlier by providing a unified access entry for service discovery, and then exposes that entry to the external network, so that access between pods and from outside goes through one unified address.
II. Use Case Interpretation
Let's walk through an actual use case to see how to declare and use a K8s Service.
Service syntax
First, let's look at the syntax of a K8s Service. The figure above shows the declaration structure of a K8s Service. This structure has much in common with the standard K8s objects introduced earlier: for example, label tags for marking, a selector for choosing backends, a label section declaring the Service's own tags, and so on.
A new element here is the protocol and port defined for K8s Service service discovery. Looking at this template, it declares a K8s Service named my-service, which carries the label app:my-service, and it selects pods with the label app:MyApp as its backend.
The last part is the service discovery protocol and port. In this example we define the TCP protocol, with port 80 and targetPort 9376. The effect is that access to port 80 of the Service is routed to the backend targetPort: as long as Service port 80 is accessed, the request is load balanced to port 9376 of the backend pods labeled app:MyApp.
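Based on the description above, the declared template looks roughly like the following sketch (the original figure is not reproduced here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  selector:
    app: MyApp         # pods with this label become the backend
  ports:
    - protocol: TCP
      port: 80         # port exposed by the Service
      targetPort: 9376 # port on the backend pods
```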
Create and view Service
How do you create the Service object just declared, and what does it look like after creation? With a simple command:
kubectl apply -f service.yaml
or
kubectl create -f service.yaml
Either command creates the Service. Once it exists, you can use:
kubectl describe service
to inspect the result of the creation.
After the Service is created, you can see that its name is my-service, and its Namespace, Label, and Selector are all as we declared. An IP address has also been generated: this is the Service's IP address, which other pods in the cluster can access. It serves as a unified access entry to the pods and as the address for service discovery.
There is also an Endpoints attribute: through Endpoints we can see which pods were selected by the declared selector, and what state those pods are in. For example, Endpoints lists the IP of each selected pod together with the targetPort those pods declared.
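Put together, the output of kubectl describe service might look roughly like the sketch below (the virtual IP matches the one used later in this article; the endpoint pod addresses are hypothetical, for illustration only):

```
Name:         my-service
Namespace:    default
Labels:       app=my-service
Selector:     app=MyApp
Type:         ClusterIP
IP:           172.29.3.27
Port:         <unset>  80/TCP
TargetPort:   9376/TCP
Endpoints:    10.0.0.12:9376,10.0.0.135:9376
```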
The actual architecture is shown in the figure above. After the Service is created, a virtual IP address and port exist in the cluster, and every pod and node in the cluster can access the Service through them. The Service mounts the pods it selects, along with their IP addresses, as its backend, so that requests arriving at the Service's IP address are load balanced across the backend pods.
When a pod's life cycle changes, for example when one of the pods is destroyed, the Service automatically removes that pod from the backend. This achieves the goal: even as pod life cycles change, the endpoint clients access does not change.
Access Service within the cluster
Within the cluster, how can other pods access the Service we created? There are three ways:
First, through the Service's virtual IP. For the newly created my-service Service, kubectl get svc or kubectl describe service shows that its virtual IP address is 172.29.3.27 and its port is 80; a pod can then access the Service directly through this virtual IP and port.
Second, by accessing the service name directly, relying on DNS resolution: pods in the same namespace can access the Service simply through its name. From a different namespace, append "." and the namespace the Service lives in to its name. For example, using curl directly against my-service:80 reaches the Service.
Third, through environment variables. When pods in the same namespace start, K8s injects the IP address, port, and other simple configuration of each Service into those pods as environment variables. After a container in a pod starts, it can read these variables to obtain the address and port of the other Services in its namespace. For example, in a pod in the cluster, MY_SERVICE_SERVICE_HOST holds the IP address of the my-service Service (MY_SERVICE being the Service we just declared), and MY_SERVICE_SERVICE_PORT holds its port number. In this way, the MY_SERVICE Service can also be requested from within the cluster.

Headless Service
A special form of Service is the Headless Service. When creating a Service, you can specify clusterIP: None, telling K8s that no cluster IP (the in-cluster virtual IP just mentioned) is needed; K8s then does not assign a virtual IP address to the Service. How can load balancing and unified access be achieved without a virtual IP address?
It works like this: a pod resolves the service name directly via DNS, and the DNS A record returns the IP addresses of all backend pods. The client then chooses one backend IP address itself. The A record changes along with pod life cycles, and so does the returned list of addresses; the client application is expected to resolve the full list of IP addresses from the A record and pick an appropriate one to access a pod.
The difference from the template declared earlier, visible in the figure above, is the added clusterIP: None, which indicates that no virtual IP is needed. The actual effect is that when a pod in the cluster accesses my-service, DNS resolution returns the IP addresses of all pods backing the Service, and the client pod then selects one of those IP addresses to access directly.
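A sketch of the same Service declared in Headless form:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None    # no virtual IP; DNS returns backend pod IPs directly
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```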
Expose Service to the outside of the cluster
So far we have covered access to a Service from nodes or pods inside the cluster. How is a Service exposed to the outside, so that the application can actually be reached from the public network? Two Service types solve this problem: NodePort and LoadBalancer.
The NodePort type exposes a port on each node of the cluster (that is, on the host of each cluster node). Accessing that port on a node performs one layer of forwarding to the Service's virtual IP on the host. The LoadBalancer type adds another layer of conversion on top of NodePort: while NodePort opens a port on every node in the cluster, LoadBalancer places a load balancer in front of all the nodes. For example, on Alibaba Cloud it provisions an SLB; this load balancer provides a unified entrance and balances incoming traffic to the NodePort of each cluster node, which is then converted to the ClusterIP and finally to the actual backend pods.

III. Operation Demonstration
The following is a hands-on demonstration of how to use K8s Service on Alibaba Cloud Container Service.
Create Service
We have already created an Alibaba Cloud container cluster and configured the local terminal to connect to it.
First, kubectl get cs shows that we are connected to the Alibaba Cloud Container Service cluster normally.
Today we will use a few templates to experience K8s Service on Alibaba Cloud. There are three templates: the first is a client, used to simulate a pod that accesses the K8s Service; the load balancing itself is declared in our Service.
Next is the K8s Service itself. As described earlier, the template creates a K8s Service that load balances, via port 80 declared on the frontend, to port 80 of the backend pods; the selector chooses pods labeled run:nginx as its backend.
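A sketch of what that demo Service template could look like (the exact file is not reproduced in the text):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx      # pods labeled run=nginx become the backend
  ports:
    - port: 80      # frontend Service port
      targetPort: 80 # backend pod port
```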
Then we create a group of pods carrying that label. How? With the K8s Deployment introduced earlier: through a Deployment we can easily create a group of pods, declaring run:nginx as the label and two replicas, so two pods run at the same time.
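A sketch of such a Deployment, assuming the standard nginx image for the demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                 # two pods run at the same time
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx            # the label the Service selects on
    spec:
      containers:
        - name: nginx
          image: nginx        # assumed image for this demo
          ports:
            - containerPort: 80
```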
First create the pods, that is, the K8s Deployment, with kubectl create -f service.yaml. Once the Deployment exists, let's check whether the pods have been created. The figure below shows that the two pods created by this Deployment are already running. kubectl get pod -o wide shows their IP addresses, and filtering with -l, that is, by the label run=nginx, we can see in the figure below that the two pods have the IP addresses 10.0.0.135 and 10.0.0.12, and both carry the label run=nginx.
Next, create the K8s Service that, as just introduced, selects these two pods. The Service has now been created.
As introduced above, kubectl describe svc shows the actual state of the Service. As shown in the figure below, the selector of the newly created nginx Service is run=nginx, and through that selector it chooses the backend pod addresses: 10.0.0.12 and 10.0.0.135, the two pods we just saw. K8s has also generated a virtual IP address in the cluster for the Service; through this virtual IP, it load balances to the two pods.
Now create a client pod to experience accessing the K8s Service. We use client.yaml to create it, and kubectl get pod shows that the client pod has been created and is already running.
Enter the pod with kubectl exec to try the three access methods just described. First, directly access the ClusterIP that K8s generated, that is, the virtual IP address. This pod has no curl, so we test with wget against the IP address. We can see that through this virtual IP address the backend nginx is reachable; the virtual IP is the unified entry.
Second, the Service can be accessed directly by its service name. Again with wget, visit the service name nginx we just created; the result is the same as before.
From a different namespace, you can also access the Service by appending the namespace name; here, the namespace is default.
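The in-cluster DNS names follow the service.namespace pattern described above; a minimal sketch of composing them:

```shell
# Name pattern: <service> within the same namespace,
# <service>.<namespace> across namespaces.
SERVICE=nginx
NAMESPACE=default
SAME_NS="${SERVICE}"
CROSS_NS="${SERVICE}.${NAMESPACE}"
echo "wget ${SAME_NS}:80"
echo "wget ${CROSS_NS}:80"
```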
Finally, the Service can also be accessed through environment variables. In this pod, running the env command shows the environment variables actually injected, including the various configuration entries registered for the nginx Service.
These environment variables can likewise be used with wget to access the Service.
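For instance, a client could build the request URL from the injected variables like this (the values below are hypothetical stand-ins for what K8s would inject):

```shell
# Hypothetical values; in a real pod, K8s injects these automatically
# for every Service in the pod's namespace.
NGINX_SERVICE_HOST=172.19.0.10
NGINX_SERVICE_PORT=80
URL="http://${NGINX_SERVICE_HOST}:${NGINX_SERVICE_PORT}"
# A client in the pod would then run: wget "$URL"
echo "$URL"
```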
Having covered these three access methods, let's look at accessing the Service from a network outside the cluster. We edit the Service we just created directly with vim.
At the end, we add a type field set to LoadBalancer, which is the external access method introduced earlier.
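A sketch of the modified Service spec with external exposure enabled:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer  # ask the cloud provider for an external load balancer
  selector:
    run: nginx
  ports:
    - port: 80
      targetPort: 80
```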
Then, with kubectl apply, the modification takes effect directly on the created Service.
Now, what happens to the Service? With kubectl get svc -o wide, we find that the nginx Service now has an EXTERNAL-IP, an externally accessible IP address; what we visited before was the CLUSTER-IP, the virtual IP inside the cluster.
Now actually visit the external IP address 39.98.21.187 to see how the application is exposed through the Service. As the terminal shows, we can access the service directly through this external endpoint of the application. Quite simple.
Finally, let's look at service discovery with K8s Service: the access address of a Service is independent of pod life cycles. The current Service selects these two pod IP addresses.
Now delete one of the pods with kubectl delete.
As we know, the Deployment automatically creates a new pod to replace it, and the pod IP address has now changed to 137.
Now describe the Service again. As shown in the figure below, the cluster access endpoint IP has not changed, nor has the external LoadBalancer IP; without affecting client access in any way, the new backend pod IP has been placed into the Service backend automatically.
This means that when an application's components call one another, there is no need to worry about changes in pod life cycles.
That concludes the demonstration.
IV. Architecture design
Finally, a brief analysis of the design of K8s Service and some principles of its implementation.
Kubernetes Service Discovery Architecture
As shown in the figure above, this is the overall architecture of K8s service discovery and K8s Service.
K8s is divided into master node and worker node:
The master handles K8s management and control; worker nodes are where user applications actually run.
The K8s master node contains the APIServer, where all objects in K8s are managed uniformly; every component registers with the APIServer to watch for changes in these objects, such as the pod life cycle events just demonstrated.
The three most critical components are:
One is the Cloud Controller Manager, responsible for configuring the LoadBalancer, the load balancer for external access. Another is CoreDNS, which watches changes to Services and their backend pods in the APIServer and configures the DNS resolution of Services, so that a Service's virtual IP can be accessed directly by its name, or, for a Headless Service, the pod IP list can be resolved. Finally, each node runs a kube-proxy component, which watches Service and pod changes and then actually configures access from the cluster's nodes and pods to the virtual IP address.
What does an actual access path look like? Take a Client Pod3 inside the cluster accessing a Service, similar to the effect just demonstrated: Client Pod3 first resolves the ServiceName to the ServiceIP through CoreDNS, then sends its request to that ServiceIP. When the request reaches the host network, it is intercepted by the iptables or IPVS rules configured by kube-proxy and load balanced across the actual backend pods, thereby achieving load balancing and service discovery.
For external traffic, such as the public network request just demonstrated: the Cloud Controller Manager watches Service changes and configures a load balancer, which forwards traffic to a NodePort on a node. The NodePort traffic is then converted by the iptables rules configured by kube-proxy into the ClusterIP, and finally into the IP address of a backend pod, performing load balancing and service discovery. This is the overall structure of K8s service discovery and K8s Service.
Follow-up progress
In the advanced part that follows, we will explain the implementation principles of K8s Service in more depth, and how to diagnose and repair Service network problems when they occur.
Summary
This concludes the main content of this article; here is a brief summary:
Why cloud native scenarios require service discovery and load balancing; how to use the Kubernetes Service for service discovery and load balancing in Kubernetes; and the components and implementation principles behind Service in a Kubernetes cluster.