Written at the front
Kubernetes involves many concepts, drawing on a wide range of technologies from the cloud-native ecosystem, so the learning cost is relatively high. Resources are usually deployed in k8s by writing yaml files, which is a high barrier for beginners. This article instead gives a quick start from the command line, surveying the core concepts of kubernetes so you can get up and running fast.
1. Basic concepts
1.1 Clusters and nodes
Kubernetes is an open source container orchestration and management platform that provides automatic deployment of containerized applications, task scheduling, auto scaling, load balancing and other functions. A cluster is composed of master and node roles: the master manages the cluster and runs the kube-apiserver, kube-controller-manager, kube-scheduler and etcd components, while the nodes run the actual applications and consist of the Container Runtime, kubelet and kube-proxy. The Container Runtime may be Docker, rkt or containerd, and nodes can be either physical machines or virtual machines.
1. View master component roles
[root@node-1 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
2. View the node list
[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   26h   v1.14.1
node-2   Ready    <none>   26h   v1.14.1
node-3   Ready    <none>   26h   v1.14.1
3. View node details
[root@node-1 ~]# kubectl describe node node-3
Name:               node-3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64          # labels and annotations
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-3
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:f8:75:bb:da:4e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.254.100.103
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 10 Aug 2019 17:50:00 +0800
Taints:             <none>
Unschedulable:      false                                  # whether scheduling is disabled; the flag controlled by the cordon command
Conditions:                                                # scheduling capacity: MemoryPressure (insufficient memory), DiskPressure, PIDPressure; Ready means the node is healthy, with sufficient resources and normal process status
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 18:04:20 +0800   KubeletReady                 kubelet is posting ready status
Addresses:                                                 # address and hostname
  InternalIP:  10.254.100.103
  Hostname:    node-3
Capacity:                                                  # node resource capacity
 cpu:                2
 ephemeral-storage:  51473868Ki
 hugepages-2Mi:      0
 memory:             3880524Ki
 pods:               110
Allocatable:                                               # allocatable resources
 cpu:                2
 ephemeral-storage:  47438316671
 hugepages-2Mi:      0
 memory:             3778124Ki
 pods:               110
System Info:                                               # system information: kernel version, OS version, cpu architecture, node software versions
 Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
 System UUID:                D98ECAB1-2D9E-41CC-9A5E-51A44DC5BB97
 Boot ID:                    6ec81f5b-cb05-4322-b47a-a8e046d9bf79
 Kernel Version:             3.10.0-957.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1               # container runtime is docker, version 18.3.1
 Kubelet Version:            v1.14.1                       # kubelet version
 Kube-Proxy Version:         v1.14.1                       # kube-proxy version
PodCIDR:                     10.244.2.0/24                 # pod network
Non-terminated Pods:         (4 in total)                  # resource usage of each pod on the node
  Namespace    Name                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                          ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-fb8b8dccf-hrqm8       100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system  coredns-fb8b8dccf-qwwks       100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system  kube-flannel-ds-amd64-zzm2g   100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      26h
  kube-system  kube-proxy-x8zqh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26h
Allocated resources:                                       # allocated resources (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                300m (15%)  100m (5%)
  memory             190Mi (5%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:              <none>
1.2 Containers and applications
Kubernetes is a container orchestration engine responsible for scheduling, managing and running containers, but its smallest schedulable unit is not the container itself: it is the pod, and a pod can contain multiple containers. Pods are usually not run directly in the cluster; they are managed through controllers such as Deployments, ReplicaSets and DaemonSets. Why? Because a controller guarantees the consistency of pod state, as officially described: "make sure the current state matches the desired state". Simply put, if a pod fails, the controller rebuilds it on another node, keeping the cluster's actual pods consistent with the expected configuration.
Container: a lightweight virtualization technology that packages an application into an image for convenient deployment and distribution. Pod: the smallest scheduling unit in kubernetes, a wrapper around containers; it contains a pause container plus application containers that share the same namespaces, network and storage. Deployments: a deployment, also called an application, is strictly speaking a stateless application (the stateful counterpart is StatefulSets). Deployments is a kind of controller that can manage the number of replicas of an application; the replica state is maintained by the Deployments controller inside kube-controller-manager.
1.3 Service access
A pod in kubernetes is the actual running carrier. A pod is attached to a node, and if that node fails, a kubernetes controller such as a replicaset pulls up a new pod on another node, and the new pod is assigned a new IP. Furthermore, an application deployment usually includes multiple replicas; say a deployment runs three pods, each pod acting as a back-end Real Server. How do we provide a single access point for these three replicas? As with Real Servers, we put a load balancer in front of them: the service, which is the load-balancing scheduler for pods. A service abstracts the dynamic pods into a single service; applications access the service, and the service automatically forwards requests to a backend pod. Two mechanisms implement the service forwarding rules: iptables and ipvs. iptables achieves load balancing by installing rules such as DNAT, while ipvs maintains forwarding rules of the kind managed with ipvsadm (a quick inspection sketch follows).
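For illustration only, a minimal sketch of how one might inspect the rules each mode produces on a node; it assumes the iptables and ipvsadm tools are installed on the host (this environment uses iptables mode, so the second command would only show output on an ipvs-mode cluster):

# iptables mode: list the nat rules kube-proxy generated for services
[root@node-1 ~]# iptables -t nat -L KUBE-SERVICES -n
# ipvs mode (not used in this environment): list virtual servers and their real servers
[root@node-1 ~]# ipvsadm -Ln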
According to how a service is accessed, it can be one of the following types, set through the type field: ClusterIP, NodePort, LoadBalancer and ExternalName.
ClusterIP provides mutual access within the cluster and, combined with DNS, implements in-cluster service discovery; NodePort exposes a port on every node through NAT to allow external access; LoadBalancer is the external access interface implemented by cloud vendors and relies on the provider's specific implementation, such as Tencent Cloud's integration with CLB.
ExternalName maps a service to an external name. Domain-based access from outside the cluster can also be implemented with an ingress, which forwards external requests into the cluster by host name; it depends on a concrete controller implementation such as nginx or traefik, or the offerings of the major cloud vendors.
Pods are dynamic: their ip addresses may change (for example after a node failure), and the number of replicas may change, for example when an application is scaled up or scaled down. How does a service track such dynamic pods? The answer is labels. The Endpoints of an application are automatically filtered through labels and automatically updated when pods change; different applications are distinguished by different labels (see the selector example below). For more information about labels, refer to https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
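As a quick illustration (assuming the nginx-app-demo application created later in this article), the pods carrying a given label can be listed with a label selector; this is exactly the mechanism a service uses to build its Endpoints:

[root@node-1 ~]# kubectl get pods -l run=nginx-app-demo --show-labels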
2. Create an application
Let's deploy an application, i.e. a deployment. Kubernetes includes several kinds of workload, such as the stateless Deployments, the stateful StatefulSets and the daemon DaemonSets, each corresponding to a different application scenario. We take Deployments as an example first; the other workloads are similar. Applications are normally deployed in kubernetes as yaml files, but for beginners writing yaml is lengthy and error-prone, so we start by driving the API with the kubectl command line.
1. Deploy the nginx application with three replicas
[root@node-1 ~]# kubectl run nginx-app-demo --image=nginx:1.7.9 --port=80 --replicas=3
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-app-demo created
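For comparison with the yaml-file style used in daily practice, here is a minimal sketch of roughly the same Deployment written as a manifest; the file name nginx-app-demo.yaml is just an example, and it would be applied with kubectl apply -f nginx-app-demo.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app-demo
  labels:
    run: nginx-app-demo
spec:
  replicas: 3                  # desired number of pods
  selector:
    matchLabels:
      run: nginx-app-demo      # must match the pod template labels below
  template:
    metadata:
      labels:
        run: nginx-app-demo
    spec:
      containers:
      - name: nginx-app-demo
        image: nginx:1.7.9
        ports:
        - containerPort: 80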
2. Look at the deployment list; the pods are healthy: READY shows ready replicas versus desired replicas, UP-TO-DATE the replicas updated to the latest template, and AVAILABLE the replicas available to serve traffic.
[root@node-1 ~]# kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   3/3     3            3           72s
3. Check the application details. As shown below, the Deployment controls the replica count through a ReplicaSet, and the ReplicaSet in turn controls the pods.
[root@node-1 ~]# kubectl describe deployments nginx-app-demo
Name:                   nginx-app-demo                        # application name
Namespace:              default                               # namespace
CreationTimestamp:      Sun, 11 Aug 2019 21:52:32 +0800
Labels:                 run=nginx-app-demo                    # labels, very important for subsequent service access
Annotations:            deployment.kubernetes.io/revision: 1  # rolling-upgrade revision number
Selector:               run=nginx-app-demo                    # label selector
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable   # replica counts
StrategyType:           RollingUpdate                         # upgrade strategy is RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge    # at most 25% of pods replaced at a time
Pod Template:                                                 # container application template: image, port, etc.
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:                                                   # current status
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-7bdfd97dcd (3/3 replicas created)   # name of the ReplicaSet controller
Events:                                                       # runtime events
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m24s  deployment-controller  Scaled up replica set nginx-app-demo-7bdfd97dcd to 3
4. Check the replicasets: the replicaset controller has generated three pods.
1. View the replicasets list
[root@node-1 ~]# kubectl get replicasets
NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-7bdfd97dcd   3         3         3       9m9s
2. Check the replicasets details
[root@node-1 ~]# kubectl describe replicasets nginx-app-demo-7bdfd97dcd
Name:           nginx-app-demo-7bdfd97dcd
Namespace:      default
Selector:       pod-template-hash=7bdfd97dcd,run=nginx-app-demo
Labels:         pod-template-hash=7bdfd97dcd     # a hash label was added to identify the replicaset
                run=nginx-app-demo
Annotations:    deployment.kubernetes.io/desired-replicas: 3   # rolling-upgrade info: desired replicas, max replicas, application revision
                deployment.kubernetes.io/max-replicas: 4
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-app-demo        # the parent controller is the Deployment nginx-app-demo
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:                                    # container template, inherited from the deployment
  Labels:  pod-template-hash=7bdfd97dcd
           run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:                                          # event log: three different pods were generated
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-hsrft
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-qtbzd
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-7t72x
5. Check the pods, the carriers where the application actually runs. Each pod runs an nginx container and is assigned an ip through which the application can be accessed directly.
1. View the list of pods; the names match those generated by the replicaset
[root@node-1 ~]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          13m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          13m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          13m
2. View pod details
[root@node-1 ~]# kubectl describe pods nginx-app-demo-7bdfd97dcd-7t72x
Name:               nginx-app-demo-7bdfd97dcd-7t72x
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node-3/10.254.100.103
Start Time:         Sun, 11 Aug 2019 21:52:32 +0800
Labels:             pod-template-hash=7bdfd97dcd    # labels
                    run=nginx-app-demo
Annotations:        <none>
Status:             Running
IP:                 10.244.2.4                      # ip address of the pod
Controlled By:      ReplicaSet/nginx-app-demo-7bdfd97dcd   # controlled by the replicaset
Containers:                                         # container info: id, image, port, state, environment variables, etc.
  nginx-app-demo:
    Container ID:   docker://5a0e5560583c5929e9768487cef43b045af4c6d3b7b927d9daf181cb28867766
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 11 Aug 2019 21:52:40 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-txhkc (ro)
Conditions:                                         # pod status conditions
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:                                            # volumes
  default-token-txhkc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-txhkc
    Optional:    false
QoS Class:       BestEffort                         # QoS class
Node-Selectors:  <none>                             # node selectors
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s      # taint tolerations
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:                                             # events: scheduled, image pulled, container started
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/nginx-app-demo-7bdfd97dcd-7t72x to node-3
  Normal  Pulling    14m   kubelet, node-3    Pulling image "nginx:1.7.9"
  Normal  Pulled     14m   kubelet, node-3    Successfully pulled image "nginx:1.7.9"
  Normal  Created    14m   kubelet, node-3    Created container nginx-app-demo
  Normal  Started    14m   kubelet, node-3    Started container nginx-app-demo
3. Access the application
Kubernetes assigns each pod an ip address through which the application can be accessed directly, which is equivalent to accessing a Real Server. However, an application consists of multiple replicas and needs a service to load-balance across them. Below we explore the ClusterIP and NodePort access methods.
3.1 Access via pod IP
1. Set the pod content. To make the pods easy to tell apart, we set the nginx site content of the three pods to different values so we can observe the load-balancing effect.
1. Check the pod list
[root@node-1 ~]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          28m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          28m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          28m
2. Enter the pod container
[root@node-1 ~]# kubectl exec -it nginx-app-demo-7bdfd97dcd-7t72x /bin/bash
3. Set the site content
root@nginx-app-demo-7bdfd97dcd-7t72x:/# echo "web1" > /usr/share/nginx/html/index.html
Similarly, set the content of the other two pods to web2 and web3:
root@nginx-app-demo-7bdfd97dcd-hsrft:/# echo web2 > /usr/share/nginx/html/index.html
root@nginx-app-demo-7bdfd97dcd-qtbzd:/# echo web3 > /usr/share/nginx/html/index.html
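Entering each pod by hand is tedious. As a non-interactive alternative, here is a sketch that loops over the pod names with kubectl exec; the web$i numbering assumes the pods are listed in the same order as above:

[root@node-1 ~]# i=1; for pod in $(kubectl get pods -l run=nginx-app-demo -o name); do
>   kubectl exec ${pod#pod/} -- /bin/sh -c "echo web$i > /usr/share/nginx/html/index.html"
>   i=$((i+1))
> done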
2. Get the ip addresses of the pods. How can you obtain them quickly? The -o wide parameter displays extra columns, including each pod's node and ip.
[root@node-1 ~]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          34m   10.244.2.4   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          34m   10.244.1.2   node-2   <none>           <none>
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          34m   10.244.1.3   node-2   <none>           <none>
3. Access the pod ips to view the site content; each pod returns the content set in the previous step.
[root@node-1 ~]# curl http://10.244.2.4
web1
[root@node-1 ~]# curl http://10.244.1.2
web2
[root@node-1 ~]# curl http://10.244.1.3
web3
3.2 ClusterIP access
Direct access through the pod ip works for a single-pod application, but it does not meet the needs of an application with multiple replicas; a service is required to load-balance across them. A service can be given different types, defaulting to ClusterIP, i.e. access from within the cluster. A deployment is exposed as a service through the kubectl expose subcommand, as follows.
1. Expose the service: --port is the port the service proxy listens on, --target-port is the container port, and --type sets the service type
[root@node-1 ~]# kubectl expose deployment nginx-app-demo \
    --name nginx-service-demo \
    --port=80 \
    --protocol=TCP \
    --target-port=80 \
    --type ClusterIP
service/nginx-service-demo exposed
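The same service expressed as a yaml manifest, for reference; this is a minimal sketch that assumes the run=nginx-app-demo label created by the deployment above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-demo
spec:
  type: ClusterIP
  selector:
    run: nginx-app-demo      # selects the pods that become the Endpoints
  ports:
  - protocol: TCP
    port: 80                 # service port (the ClusterIP's port)
    targetPort: 80           # container port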
2. Look at the service details: through the label selector, the service automatically collects the pod ips into its endpoints.
1. Check the service list; there are two, kubernetes being the service created by default for the cluster itself
[root@node-1 ~]# kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP   29h
nginx-service-demo   ClusterIP   10.102.1.1   <none>        80/TCP    2m54s
2. View the service details; the label Selector matches the Deployments setting, and the pods form the Endpoints list
[root@node-1 ~]# kubectl describe services nginx-service-demo
Name:              nginx-service-demo            # name
Namespace:         default                       # namespace
Labels:            run=nginx-app-demo            # labels
Annotations:       <none>
Selector:          run=nginx-app-demo            # label selector
Type:              ClusterIP                     # service type is ClusterIP
IP:                10.102.1.1                    # service ip, i.e. the vip, automatically assigned within the cluster
Port:              80/TCP                        # service port, i.e. the ClusterIP's access port
TargetPort:        80/TCP                        # container port
Endpoints:         10.244.1.2:80,10.244.1.3:80,10.244.2.4:80   # list of access endpoints
Session Affinity:  None                          # load-balancing scheduling policy
Events:            <none>
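The endpoints are also a first-class API object and can be inspected directly; a quick check that the list tracks the pods:

[root@node-1 ~]# kubectl get endpoints nginx-service-demo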
3. Access the service address: the service automatically load-balances across the pods, and the scheduling policy is round-robin. Why? Because the service's default Session Affinity is None, i.e. round-robin. It can be set to ClientIP for session persistence, so that requests from the same client IP are always dispatched to the same pod (a sketch follows the output below).
[root@node-1 ~]# curl http://10.102.1.1
web3
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
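To illustrate session persistence, a sketch of switching the affinity to ClientIP with kubectl patch; after this, repeated curls from the same host would keep hitting one pod:

[root@node-1 ~]# kubectl patch services nginx-service-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'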
4. A deeper look at the ClusterIP mechanism: the service backend is implemented by either iptables or ipvs. This environment uses iptables, which generates access rules as nat chains: KUBE-SVC-R5Y5DZHD7Q6DDTFZ is the inbound DNAT forwarding rule, and KUBE-MARK-MASQ marks outbound traffic for masquerading.
[root@node-1 ~]# iptables -t nat -L -n
Chain KUBE-SERVICES (2 references)
target                     prot opt source           destination
KUBE-MARK-MASQ             tcp  -- !10.244.0.0/16    10.102.1.1    /* default/nginx-service-demo: cluster IP */ tcp dpt:80
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0        10.102.1.1    /* default/nginx-service-demo: cluster IP */ tcp dpt:80
Outbound: when the source address is not in 10.244.0.0/16 and the destination is 10.102.1.1 port 80, the request is forwarded to the KUBE-MARK-MASQ chain.
Inbound: when any source address accesses 10.102.1.1 port 80, the request is forwarded to the KUBE-SVC-R5Y5DZHD7Q6DDTFZ chain.
5. Check the inbound request rules: inbound requests are mapped to different chains, and each chain forwards to the ip of a different pod.
1. View the inbound rule KUBE-SVC-R5Y5DZHD7Q6DDTFZ; requests are distributed across three chains
[root@node-1 ~]# iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n
Chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ (1 references)
target                     prot opt source       destination
KUBE-SEP-DSWLUQNR4UPH24AX  all  --  0.0.0.0/0    0.0.0.0/0    statistic mode random probability 0.33332999982
KUBE-SEP-56SLMGHHOILJT36K  all  --  0.0.0.0/0    0.0.0.0/0    statistic mode random probability 0.50000000000
KUBE-SEP-K6G4Z74HQYF6X7SI  all  --  0.0.0.0/0    0.0.0.0/0
2. View the three forwarding chains; each maps to the ip address of a different pod
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-DSWLUQNR4UPH24AX -n
Chain KUBE-SEP-DSWLUQNR4UPH24AX (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.2   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.2:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-56SLMGHHOILJT36K -n
Chain KUBE-SEP-56SLMGHHOILJT36K (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.3   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.3:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-K6G4Z74HQYF6X7SI -n
Chain KUBE-SEP-K6G4Z74HQYF6X7SI (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.2.4   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.2.4:80
3.3 NodePort access
With ClusterIP, a service only provides access to the application from within the cluster; it cannot be reached directly from outside. For external access there are several options: NodePort, LoadBalancer and Ingress. LoadBalancer must be implemented by a cloud provider, and Ingress requires installing a separate Ingress Controller; for day-to-day testing, NodePort is enough: it exposes a port on every node for external access.
1. Change the service type from ClusterIP to NodePort (or re-create the service and specify type NodePort, as sketched after the output below)
1. Modify the type via patch
[root@node-1 ~]# kubectl patch services nginx-service-demo -p '{"spec":{"type":"NodePort"}}'
service/nginx-service-demo patched
2. Confirm the yaml configuration; a NodePort port has been assigned, and every node will listen on that port
[root@node-1 ~]# kubectl get services nginx-service-demo -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-08-11T14:35:59Z"
  labels:
    run: nginx-app-demo
  name: nginx-service-demo
  namespace: default
  resourceVersion: "157676"
  selfLink: /api/v1/namespaces/default/services/nginx-service-demo
  uid: 55e29b78-bc45-11e9-b073-525400490421
spec:
  clusterIP: 10.102.1.1
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32416        # automatically assigned NodePort port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app-demo
  sessionAffinity: None
  type: NodePort           # type changed to NodePort
status:
  loadBalancer: {}
3. Check the service list; the type is now NodePort, and the ClusterIP access address is retained
[root@node-1 ~]# kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP        30h
nginx-service-demo   NodePort    10.102.1.1   <none>        80:32416/TCP   68m
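The re-creation variant mentioned above would look like this; nginx-service-nodeport is a hypothetical name chosen to avoid clashing with the existing service:

[root@node-1 ~]# kubectl expose deployment nginx-app-demo --name nginx-service-nodeport --port=80 --target-port=80 --type NodePort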
2. Access the application through the NodePort. Each node's address now acts like a vip and gives the same load-balancing effect, while the ClusterIP remains usable at the same time.
1. NodePort load balancing
[root@node-1 ~]# curl http://node-1:32416
web1
[root@node-1 ~]# curl http://node-2:32416
web1
[root@node-1 ~]# curl http://node-3:32416
web1
[root@node-1 ~]# curl http://node-3:32416
web3
[root@node-1 ~]# curl http://node-3:32416
web2
2. ClusterIP load balancing
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web3
3. How NodePort forwarding works: kube-proxy listens on the NodePort on every node, and iptables rules forward the traffic to the backend pods.
1. The NodePort listening port
[root@node-1 ~]# netstat -antupl | grep 32416
tcp6    0    0 :::32416    :::*    LISTEN    32052/kube-proxy
2. Check the nat table forwarding rules: a KUBE-MARK-MASQ egress rule and a KUBE-SVC-R5Y5DZHD7Q6DDTFZ inbound rule
Chain KUBE-NODEPORTS (1 references)
target                     prot opt source      destination
KUBE-MARK-MASQ             tcp  --  0.0.0.0/0   0.0.0.0/0   /* default/nginx-service-demo: */ tcp dpt:32416
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0   0.0.0.0/0   /* default/nginx-service-demo: */ tcp dpt:32416
3. Check the inbound request chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ
[root@node-1 ~]# iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n
Chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ (2 references)
target                     prot opt source      destination
KUBE-SEP-DSWLUQNR4UPH24AX  all  --  0.0.0.0/0   0.0.0.0/0   statistic mode random probability 0.33332999982
KUBE-SEP-56SLMGHHOILJT36K  all  --  0.0.0.0/0   0.0.0.0/0   statistic mode random probability 0.50000000000
KUBE-SEP-K6G4Z74HQYF6X7SI  all  --  0.0.0.0/0   0.0.0.0/0
4. Continue down the forwarding chains; each contains a DNAT forwarding rule and a KUBE-MARK-MASQ rule for the return path
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-DSWLUQNR4UPH24AX -n
Chain KUBE-SEP-DSWLUQNR4UPH24AX (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.2   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.2:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-56SLMGHHOILJT36K -n
Chain KUBE-SEP-56SLMGHHOILJT36K (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.3   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.3:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-K6G4Z74HQYF6X7SI -n
Chain KUBE-SEP-K6G4Z74HQYF6X7SI (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.2.4   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.2.4:80
4. Expand the application
When the load on an application grows too high to serve its requests, we usually scale out by adding Real Servers. In kubernetes, this means increasing the replicas count, which makes scaling out and in very convenient and fast. Kubernetes offers two ways to scale: 1. manual scaling, scale up and scale down; 2. dynamic auto scaling with HorizontalPodAutoscalers, which scales automatically based on CPU utilization and depends on a monitoring component such as metrics-server. Auto scaling is not set up in this environment and will be discussed in depth later (a brief sketch follows); in this article we scale the replica count manually.
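For reference only, a sketch of what the auto-scaling setup would look like once metrics-server is available; the thresholds shown are arbitrary examples:

[root@node-1 ~]# kubectl autoscale deployment nginx-app-demo --min=3 --max=10 --cpu-percent=80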
1. Manually scale out the replica count
[root@node-1 ~]# kubectl scale --replicas=4 deployment nginx-app-demo
deployment.extensions/nginx-app-demo scaled
2. Check the result of the scale-out: the deployment has automatically brought up a fourth replica.
[root@node-1 ~]# kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   4/4     4            4           133m
3. What happens to the service? Check the service details: the newly scaled pod is automatically added to the service's endpoints, i.e. service discovery is automatic.
1. View the service details
[root@node-1 ~]# kubectl describe services nginx-service-demo
Name:                     nginx-service-demo
Namespace:                default
Labels:                   run=nginx-app-demo
Annotations:              <none>
Selector:                 run=nginx-app-demo
Type:                     NodePort
IP:                       10.102.1.1
Port:                     80/TCP
TargetPort:               80/TCP
NodePort:                 32416/TCP
Endpoints:                10.244.1.2:80,10.244.1.3:80 + 2 more...   # the new pod address has been added automatically
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
2. View the endpoints details
[root@node-1 ~]# kubectl describe endpoints nginx-service-demo
Name:         nginx-service-demo
Namespace:    default
Labels:       run=nginx-app-demo
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2019-08-11T16:04:56Z
Subsets:
  Addresses:          10.244.1.2,10.244.1.3,...
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  80    TCP
Events:  <none>
4. Test: set the site content of the newly added pod to web4 (using the same method as before), then access the service ip to observe the load-balancing effect.
[root@node-1 ~]# curl http://10.102.1.1
web4
[root@node-1 ~]# curl http://10.102.1.1
web4
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web3
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web1
As you can see, scaled-out pods are added to the service automatically, giving automatic service discovery and load balancing; expanding an application is far faster than with traditional deployments.
5. Rolling upgrade
To update an application in kubernetes, you package the new version as an image and then update the deployment's image. The default Deployment upgrade strategy is RollingUpdate, which replaces at most 25% of the application's pods at a time, swapping in new pods gradually so the application never becomes unavailable during the upgrade. If the upgrade goes wrong, the application can be rolled back to its previous state; the rollback is implemented through the retained replicasets.
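The strategy is tunable per deployment; a minimal sketch of the relevant spec fields in yaml (the values shown are the defaults this article relies on):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most a quarter of the pods may be down during the upgrade
      maxSurge: 25%         # at most a quarter more pods than desired may exist temporarily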
1. Change the nginx image to upgrade the application to the latest version; open another window and run kubectl get pods -w to watch the upgrade progress.
[root@node-1 ~]# kubectl set image deployments/nginx-app-demo nginx-app-demo=nginx:latest
deployment.extensions/nginx-app-demo image updated
2. Observe the upgrade: pods are replaced one by one, with new pods created as old ones are deleted.
[root@node-1 ~]# kubectl get pods -w
NAME                              READY   STATUS              RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running             0          145m
nginx-app-demo-5cc8746f96-xsxz4   0/1     Pending             0          0s     # new pod created
nginx-app-demo-5cc8746f96-xsxz4   0/1     Pending             0          0s
nginx-app-demo-7bdfd97dcd-j6lgd   1/1     Terminating         0          14m    # old pod deleted and replaced
nginx-app-demo-5cc8746f96-xsxz4   0/1     ContainerCreating   0          0s
nginx-app-demo-5cc8746f96-s49nv   0/1     Pending             0          0s     # second new pod created
nginx-app-demo-5cc8746f96-s49nv   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-s49nv   0/1     ContainerCreating   0          0s
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0          14m    # second old pod replaced
nginx-app-demo-5cc8746f96-s49nv   1/1     Running             0          7s
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Terminating         0          146m
nginx-app-demo-5cc8746f96-txjqh   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-txjqh   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-txjqh   0/1     ContainerCreating   0          0s
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0          14m
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0          14m
nginx-app-demo-5cc8746f96-xsxz4   1/1     Running             0          9s
nginx-app-demo-5cc8746f96-txjqh   1/1     Running             0          1s
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Terminating         0          146m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0          146m
nginx-app-demo-5cc8746f96-rcpmw   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-rcpmw   0/1     Pending             0          0s
nginx-app-demo-5cc8746f96-rcpmw   0/1     ContainerCreating   0          0s
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Terminating         0          146m
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0          146m
nginx-app-demo-5cc8746f96-rcpmw   1/1     Running             0          2s
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0          147m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0          147m
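As an alternative to watching the pod list, the progress of the rollout can also be followed with the standard kubectl rollout status command, which blocks until the deployment finishes rolling out:

[root@node-1 ~]# kubectl rollout status deployment nginx-app-demo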
3. Check the deployment details again: the deployment now points at a new replicaset, and the original replicaset is kept as revision 1, available for rollback.
[root@node-1 ~]# kubectl describe deployments nginx-app-demo
Name:                   nginx-app-demo
Namespace:              default
CreationTimestamp:      Sun, 11 Aug 2019 21:52:32 +0800
Labels:                 run=nginx-app-demo
Annotations:            deployment.kubernetes.io/revision: 2    # new revision number, used for rollback
Selector:               run=nginx-app-demo
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-5cc8746f96 (4/4 replicas created)   # the new replicaset that replaced the old one
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  19m    deployment-controller  Scaled up replica set nginx-app-demo-7bdfd97dcd to 4
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 1
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 3
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 2
  Normal  ScalingReplicaSet  4m43s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 2
  Normal  ScalingReplicaSet  4m43s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 3
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 1
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 4
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 0
4. Look at the rolling-upgrade history: there are two revisions, corresponding to the two replicasets.
[root@node-1 ~]# kubectl rollout history deployment nginx-app-demo
deployment.extensions/nginx-app-demo
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
View the replicasets list; the old replicaset now contains 0 pods
[root@node-1 ~]# kubectl get replicasets
NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-5cc8746f96   4         4         4       9m2s
nginx-app-demo-7bdfd97dcd   0         0         0       155m
5. Test the upgraded application: nginx has been upgraded to the latest version, nginx/1.17.2.
[root@node-1 ~]# curl -I http://10.102.1.1
HTTP/1.1 200 OK
Server: nginx/1.17.2          # nginx version information
Date: Sun, 11 Aug 2019 16:30:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
ETag: "5d36f361-264"
Accept-Ranges: bytes
6. Roll back to the old version
[root@node-1 ~]# kubectl rollout undo deployment nginx-app-demo --to-revision=1
deployment.extensions/nginx-app-demo rolled back
Test the application again; it has been rolled back to the old version.
[root@node-1 ~]# curl -I http://10.102.1.1
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 11 Aug 2019 16:34:33 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes
Written at the end: this article has walked through the most important kubernetes concepts from the command line: application deployment, load balancing, auto scaling and rolling upgrades. Readers can follow along with the documentation below for a quick start; in day-to-day practice, most interaction with kubernetes is done through yaml files instead.
Reference documentation
Basic concept: https://kubernetes.io/docs/tutorials/kubernetes-basics/
Deploy applications: https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/
Exploring applications: https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
External access: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
Scaling applications: https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/
Rolling upgrade: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
When your talent can't support your ambition, you should calm down and study.
Return to the kubernetes series tutorial directory