Service explanation of Kubernetes Controller (7)

First, background introduction

Here we prepare three machines, one master and two nodes, installed with kubeadm. For the installation process, you can refer to my previous blog post.

IP              Role     Version
192.168.1.200   master   kubeadm v1.13.0
192.168.1.201   node01   kubeadm v1.13.0
192.168.1.202   node02   kubeadm v1.13.0

We should not expect Kubernetes Pods to be robust; rather, we should assume that the containers in a Pod are likely to die from failures of various kinds. Controllers such as Deployment dynamically create and destroy Pods to ensure the robustness of the application as a whole. In other words, Pods are fragile, but the application is robust.

Each Pod has its own IP address. When a controller replaces a failed Pod with a new one, the new Pod is assigned a new IP address. This creates a problem:

If a group of Pods provides a service (such as HTTP) and their IPs are likely to change, how can clients find and access the service?

The solution given by Kubernetes is Service.

Second, create a Service

A Kubernetes Service logically represents a set of Pods; exactly which Pods is determined by a label selector. The Service has its own IP, and this IP does not change. A client only needs to access the Service's IP; Kubernetes is responsible for establishing and maintaining the mapping between the Service and its Pods. No matter how the backend Pods change, there is no impact on the client, because the Service does not change.

1. Create a Deployment

Create a file named mytest-deploy.yaml and add the following:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytest
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: mytest
    spec:
      containers:
      - name: mytest
        image: wangzan18/mytest:v1
        ports:
        - containerPort: 8080

Create the Pods.

[root@master ~]# kubectl apply -f mytest-deploy.yaml
deployment.extensions/mytest created
[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
mytest-88d46bf99-cd4zk   1/1     Running   0          70s   10.244.2.2   node02   <none>           <none>
mytest-88d46bf99-fsmcj   1/1     Running   0          70s   10.244.1.3   node01   <none>           <none>
mytest-88d46bf99-ntd5n   1/1     Running   0          70s   10.244.1.2   node01   <none>           <none>
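These Pods carry the run=mytest label from the Deployment template, and it is this label that the Service below will select on. You can preview the selection with a label filter (a quick sketch; names and ages will differ on your cluster):

[root@master ~]# kubectl get pod -l run=mytest
NAME                     READY   STATUS    RESTARTS   AGE
mytest-88d46bf99-cd4zk   1/1     Running   0          70s
mytest-88d46bf99-fsmcj   1/1     Running   0          70s
mytest-88d46bf99-ntd5n   1/1     Running   0          70s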

Each Pod is assigned its own IP, and these IPs can be reached only by containers and nodes within the Kubernetes cluster.
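To see the point from the background section in action, delete one Pod and watch the Deployment controller create a replacement with a new IP. This is a sketch; the replacement's name and IP here are illustrative:

[root@master ~]# kubectl delete pod mytest-88d46bf99-cd4zk
pod "mytest-88d46bf99-cd4zk" deleted
[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE
mytest-88d46bf99-fsmcj   1/1     Running   0          5m    10.244.1.3   node01
mytest-88d46bf99-ntd5n   1/1     Running   0          5m    10.244.1.2   node01
mytest-88d46bf99-xq7rp   1/1     Running   0          10s   10.244.2.3   node02

The replacement Pod gets a new IP, which is exactly the problem the Service we create next solves.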

2. Create Service

Create the file mytest-svc.yaml and add the following:

apiVersion: v1
kind: Service
metadata:
  name: mytest-svc
spec:
  selector:
    run: mytest
  ports:
  - port: 80
    targetPort: 8080

Create the Service.

[root@master ~]# kubectl apply -f mytest-svc.yaml
service/mytest-svc created
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   33m
mytest-svc   ClusterIP   10.100.77.149   <none>        80/TCP    8s

mytest-svc is assigned a CLUSTER-IP of 10.100.77.149, and the backend mytest Pods can be accessed through this IP.

[root@master ~]# curl 10.100.77.149
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-ntd5n | v=1

You can view the correspondence between mytest-svc and the Pods with kubectl describe.

[root@master ~]# kubectl describe svc mytest-svc
Name:              mytest-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mytest-svc","namespace":"default"},"spec":{"ports":[{"port":80,"t...
Selector:          run=mytest
Type:              ClusterIP
IP:                10.100.77.149
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         10.244.1.2:8080,10.244.1.3:8080,10.244.2.2:8080
Session Affinity:  None
Events:            <none>
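The same Service-to-Pod mapping can also be read directly from the Endpoints object (a sketch; the age will vary):

[root@master ~]# kubectl get endpoints mytest-svc
NAME         ENDPOINTS                                         AGE
mytest-svc   10.244.1.2:8080,10.244.1.3:8080,10.244.2.2:8080   5m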

Endpoints lists the IPs and ports of the three Pods. We know that a Pod's IP is configured in the container, but where is the Service's Cluster IP configured? How is the CLUSTER-IP mapped to the Pod IPs?

Third, the underlying implementation of Cluster IP

1. iptables

A Service's Cluster IP is a virtual IP, managed by iptables rules on the Kubernetes nodes.
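Because the Cluster IP is virtual, in iptables mode it is not bound to any network interface on the node; it exists only in the iptables rules. A quick check (a sketch; the first command is expected to print nothing):

[root@master ~]# ip addr | grep 10.100.77.149
[root@master ~]# iptables-save | grep -c 10.100.77.149
2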

You can print the current node's iptables rules with the iptables-save command. Because the output is large, only the rules related to mytest-svc's Cluster IP 10.100.77.149 are shown here:

[root@master ~]# iptables-save | grep 10.100.77.149
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.100.77.149/32 -p tcp -m comment --comment "default/mytest-svc: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.77.149/32 -p tcp -m comment --comment "default/mytest-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XKNZ3BN47GCYFIPJ

The meanings of these two rules are:

Traffic to mytest-svc that does not originate from a Pod within the cluster (source address not in 10.244.0.0/16) is marked for masquerading by KUBE-MARK-MASQ. All traffic destined for mytest-svc then jumps to the chain KUBE-SVC-XKNZ3BN47GCYFIPJ.

Let's take a look at what the KUBE-SVC-XKNZ3BN47GCYFIPJ chain does:

-A KUBE-SVC-XKNZ3BN47GCYFIPJ -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-6VUP2B3YLPPLYJJV
-A KUBE-SVC-XKNZ3BN47GCYFIPJ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ENVKJLELDEHDNVGK
-A KUBE-SVC-XKNZ3BN47GCYFIPJ -j KUBE-SEP-IZPSUB6K7QCCEPS3

With 1/3 probability, traffic jumps to the chain KUBE-SEP-6VUP2B3YLPPLYJJV; with 1/3 probability (half of the remaining 2/3) it jumps to KUBE-SEP-ENVKJLELDEHDNVGK; with the remaining 1/3 probability it jumps to KUBE-SEP-IZPSUB6K7QCCEPS3.

The three KUBE-SEP chains referenced above are as follows:

Forward to Pod 10.244.1.2.

-A KUBE-SEP-6VUP2B3YLPPLYJJV -s 10.244.1.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-6VUP2B3YLPPLYJJV -p tcp -m tcp -j DNAT --to-destination 10.244.1.2:8080

Forward to Pod 10.244.1.3.

-A KUBE-SEP-ENVKJLELDEHDNVGK -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ENVKJLELDEHDNVGK -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080

Forward to Pod 10.244.2.2.

-A KUBE-SEP-IZPSUB6K7QCCEPS3 -s 10.244.2.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IZPSUB6K7QCCEPS3 -p tcp -m tcp -j DNAT --to-destination 10.244.2.2:8080

You can see that requests are forwarded to the three backend Pods. From this we can see that iptables forwards traffic destined for the Service to the backend Pods, using a load-balancing policy similar to round-robin.
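You can observe this behavior with a simple loop; over enough requests, the three Pod names should appear in roughly equal proportion (a sketch; the ordering is random):

[root@master ~]# for i in $(seq 1 6); do curl -s 10.100.77.149; done
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-fsmcj | v=1
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-ntd5n | v=1
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-cd4zk | v=1
...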

Every node in the cluster is configured with the same iptables rules, which ensures that the Service can be reached through its Cluster IP from anywhere in the cluster.

2. ipvs

iptables is used by default. If we want kube-proxy to forward with ipvs instead, we need to enable it; for details, see my previous deployment blog post.
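Roughly, enabling ipvs on a kubeadm cluster means loading the ip_vs kernel modules, switching kube-proxy's mode, and recreating the kube-proxy Pods. A minimal sketch, not a complete procedure (see the deployment post referenced above for details):

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
# set mode: "ipvs" in the kube-proxy configuration
kubectl edit configmap kube-proxy -n kube-system
# recreate the kube-proxy Pods so they pick up the new mode
kubectl delete pod -n kube-system -l k8s-app=kube-proxy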

With ipvs enabled, the forwarding rules look as follows, which is more intuitive. Enabling ipvs is recommended.

[root@master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:32501 rr
  -> 10.244.1.4:8080              Masq    1      0          0
  -> 10.244.1.5:8080              Masq    1      0          0
  -> 10.244.2.4:8080              Masq    1      0          0
TCP  192.168.1.200:32501 rr
  -> 10.244.1.4:8080              Masq    1      0          0
  -> 10.244.1.5:8080              Masq    1      0          0
  -> 10.244.2.4:8080              Masq    1      0          0
TCP  10.96.0.1:443 rr
  -> 192.168.1.200:6443           Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
TCP  10.99.143.93:80 rr
  -> 10.244.1.4:8080              Masq    1      0          0
  -> 10.244.1.5:8080              Masq    1      0          0
  -> 10.244.2.4:8080              Masq    1      0          0
TCP  10.244.0.0:32501 rr
  -> 10.244.1.4:8080              Masq    1      0          0
  -> 10.244.1.5:8080              Masq    1      0          0
  -> 10.244.2.4:8080              Masq    1      0          0
TCP  10.244.0.1:32501 rr
  -> 10.244.1.4:8080              Masq    1      0          0
  -> 10.244.1.5:8080              Masq    1      0          0
  -> 10.244.2.4:8080              Masq    1      0          0
TCP  127.0.0.1:32501 rr
  -> 10.244.1.4:8080              Masq    1      0          0
  -> 10.244.1.5:8080              Masq    1      0          0
  -> 10.244.2.4:8080              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0

Fourth, DNS access to Service

Within the cluster, in addition to accessing a Service through its Cluster IP, Kubernetes also provides a more convenient option: DNS access.

The coredns component is installed by default when deploying with kubeadm.

[root@master ~]# kubectl get deploy -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           84m
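coredns itself is exposed inside the cluster through the kube-dns Service, whose Cluster IP is what Pods use as their nameserver (a sketch; the age will vary):

[root@master ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   84m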

coredns is a DNS server. Whenever a new Service is created, coredns adds a DNS record for it. Pods in the cluster can then access the Service via <SERVICE_NAME>.<NAMESPACE_NAME>.

For example, you can use mytest-svc.default to access Service mytest-svc.

[root@master ~]# kubectl run busybox --rm -it --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget mytest-svc.default
Connecting to mytest-svc.default (10.100.77.149:80)
index.html           100% |*******************************|    70   0:00:00 ETA
/ #

As shown above, we verified DNS access in a temporary busybox Pod. In addition, since this Pod and mytest-svc both belong to the default namespace, default can be omitted and the Service can be accessed as just mytest-svc.
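The short name works because the Pod's /etc/resolv.conf carries search domains for its namespace; the fully qualified name is <SERVICE_NAME>.<NAMESPACE_NAME>.svc.cluster.local. You can confirm this from inside the busybox Pod (a sketch; the output format varies with the busybox version):

/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # nslookup mytest-svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      mytest-svc
Address 1: 10.100.77.149 mytest-svc.default.svc.cluster.local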

Fifth, access Service through the public network

In addition to accessing a Service from within the cluster, in many cases we also want to expose an application's Service outside the cluster. Kubernetes provides several types of Service; the default is ClusterIP.

ClusterIP

The Service is exposed through an IP internal to the cluster, and only nodes and Pods within the cluster can access it. This is the default Service type; the Services in the previous experiments were all ClusterIP.

NodePort

The Service is exposed through a static port on each cluster node, and can be accessed from outside the cluster via <NodeIP>:<NodePort>.

LoadBalancer

The Service is exposed through a cloud provider's load balancer, and the cloud provider is responsible for directing the load-balanced traffic to the Service. Currently supported cloud providers include GCP, AWS, Azure, and so on.
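For reference, a LoadBalancer Service differs from our manifest only in its type. A minimal sketch (the name mytest-svc-lb is illustrative, and an EXTERNAL-IP is only assigned on a supported cloud provider):

apiVersion: v1
kind: Service
metadata:
  name: mytest-svc-lb
spec:
  type: LoadBalancer
  selector:
    run: mytest
  ports:
  - port: 80
    targetPort: 8080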

Let's practice NodePort. Modify the Service configuration file mytest-svc.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: mytest-svc
spec:
  type: NodePort
  selector:
    run: mytest
  ports:
  - port: 80
    targetPort: 8080

Add type: NodePort and recreate the mytest-svc.

[root@master ~]# kubectl apply -f mytest-svc.yaml
service/mytest-svc configured
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        98m
mytest-svc   NodePort    10.100.77.149   <none>        80:31298/TCP   65m

Kubernetes still assigns a Cluster IP to mytest-svc, except that PORT(S) is now 80:31298. 80 is the port the ClusterIP listens on, and 31298 is the port each node listens on. We can access the Pods via <NodeIP>:31298. Kubernetes allocates an available port from the range 30000-32767; every node listens on this port and forwards requests to the Service.
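kube-proxy actually binds this port on every node in order to reserve it, which you can verify with ss (a sketch; the exact columns depend on your ss version):

[root@master ~]# ss -lnt | grep 31298
LISTEN     0      128       *:31298                  *:*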

[root@master ~]# curl 192.168.1.200:31298
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-cd4zk | v=1
[root@master ~]# curl 192.168.1.201:31298
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-cd4zk | v=1
[root@master ~]# curl 192.168.1.202:31298
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-cd4zk | v=1

Next, let's delve into a question: how does Kubernetes map <NodeIP>:<NodePort> to the Pods?

Like ClusterIP, NodePort also relies on iptables. Compared with ClusterIP, the following two rules are added to the iptables on each node:

-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mytest-svc:" -m tcp --dport 31298 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mytest-svc:" -m tcp --dport 31298 -j KUBE-SVC-XKNZ3BN47GCYFIPJ

The meaning of these rules is: requests that reach port 31298 of the current node are marked and then have the chain KUBE-SVC-XKNZ3BN47GCYFIPJ applied, which looks as follows:

-A KUBE-SVC-XKNZ3BN47GCYFIPJ -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-6VUP2B3YLPPLYJJV
-A KUBE-SVC-XKNZ3BN47GCYFIPJ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ENVKJLELDEHDNVGK
-A KUBE-SVC-XKNZ3BN47GCYFIPJ -j KUBE-SEP-IZPSUB6K7QCCEPS3

Its function is to load-balance the traffic across the Pods. The node port is chosen randomly by default, but we can specify a particular port with nodePort.

apiVersion: v1
kind: Service
metadata:
  name: mytest-svc
spec:
  type: NodePort
  selector:
    run: mytest
  ports:
  - port: 80
    nodePort: 30000
    targetPort: 8080

nodePort is the port the node listens on; port is the port the ClusterIP listens on; targetPort is the port the Pod listens on.
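After re-applying the manifest, the Service should answer on the fixed port 30000 on every node (a sketch, following the same pattern as above):

[root@master ~]# kubectl apply -f mytest-svc.yaml
service/mytest-svc configured
[root@master ~]# curl 192.168.1.200:30000
Hello Kubernetes bootcamp! | Running on: mytest-88d46bf99-fsmcj | v=1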

Ultimately, requests received by the node and by the ClusterIP on their respective ports are forwarded to the Pod's targetPort through iptables.
