This article shows you how to use the Kubernetes application manager OpenKruise. The content is concise and easy to follow, and I hope you will get something out of this detailed introduction.
OpenKruise
OpenKruise is a standard extension for Kubernetes. It can be used alongside native Kubernetes and provides more powerful and efficient capabilities for managing application containers, sidecars, image distribution, and so on.
Core features
Upgrade in place
In-place upgrade is the ability to upgrade a container image without deleting and recreating the Pod. It is faster and more efficient than the Pod-rebuild upgrade of native Deployment/StatefulSet, and it avoids disturbing other containers in the Pod that do not need to be updated.
Sidecar management
Sidecar containers can be defined in a separate CR, and OpenKruise injects these sidecar containers into all matching Pods. The process is similar to Istio's sidecar injection, but you can manage any sidecars you care about.
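To make this concrete, here is a minimal sketch of a SidecarSet (not part of the original walkthrough; the selector and the busybox sidecar are placeholder examples, and the field layout follows the apps.kruise.io/v1alpha1 API):

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: log-sidecar
spec:
  # Inject into every Pod whose labels match this selector
  selector:
    matchLabels:
      app: nginx-alpine
  containers:
  - name: log-agent                       # placeholder sidecar container
    image: busybox:latest
    command: ["sh", "-c", "tail -f /dev/null"]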
Deployment across multiple availability zones
You define a global workload that spans multiple availability zones, and OpenKruise creates a corresponding subordinate workload in each zone. You can manage their replica counts and versions uniformly, and even adopt different release strategies for different availability zones.
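As a rough, hedged sketch of what such a multi-zone workload can look like (not from the original article; the subset names, zone labels, and replica counts are made up for illustration, and the field layout should be checked against the UnitedDeployment CRD version you install):

apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: demo-ud
spec:
  replicas: 6
  selector:
    matchLabels:
      app: demo
  template:
    # Each subset workload is stamped out from this template
    statefulSetTemplate:
      metadata:
        labels:
          app: demo
      spec:
        selector:
          matchLabels:
            app: demo
        template:
          metadata:
            labels:
              app: demo
          spec:
            containers:
            - name: nginx
              image: nginx:alpine
  topology:
    subsets:
    - name: subset-zone-a            # illustrative subset bound to zone-a nodes
      replicas: 3
      nodeSelectorTerm:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["zone-a"]
    - name: subset-zone-b            # illustrative subset bound to zone-b nodes
      replicas: 3
      nodeSelectorTerm:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["zone-b"]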
CRD list
CloneSet: provides more efficient, deterministic, and controllable application management and deployment capabilities, with rich strategies such as graceful in-place upgrade, specified-Pod deletion, configurable release order, and parallel/grayscale release, covering more diverse application scenarios.
Advanced StatefulSet: an enhanced version of the native StatefulSet; the default behavior is exactly the same as the native one, and on top of that it provides features such as in-place upgrade, parallel release (max unavailable), and release pausing.
SidecarSet: manages sidecar containers centrally and injects the specified sidecar containers into Pods that match the selector.
UnitedDeployment: deploys an application to multiple availability zones through multiple subset workloads.
BroadcastJob: configures a job that runs a Pod task on every eligible Node in the cluster.
Advanced DaemonSet: an enhanced version of the native DaemonSet; the default behavior is the same as the native one, and in addition it provides release strategies such as grayscale batching, selection by Node label, pausing, and hot upgrade.
AdvancedCronJob: an extended CronJob controller; its template currently supports Job or BroadcastJob.
The above is what the official documentation introduces; this article focuses on hands-on practice. We start with CloneSet, and the other controllers will be covered in follow-up articles.
Deploy Kruise to a Kubernetes cluster
Helm is used here to install Kruise
1. Download the kruise Chart
wget https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
tar -zxf kruise-chart.tgz
cd kruise
[root@ kruise]# ls -l
total 16
-rw-r--r-- 1 root root  311 Dec 20 15:09 Chart.yaml
-rw-r--r-- 1 root root 4052 Dec 20 15:09 README.md
drwxr-xr-x 2 root root 4096 Dec 23 10:18 templates
-rw-r--r-- 1 root root  659 Dec 20 15:09 values.yaml
2. Modify values.yaml. By default, nothing needs to be changed.
3. Perform deployment
[root@qd01-stop-k8s-master001 kruise]# kubectl create ns kruise
namespace/kruise created
[root@qd01-stop-k8s-master001 kruise]# helm install kruise -n kruise -f values.yaml .
W1223 10:22:13.562088 1589994 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16, unavailable in v1.22; use apiextensions.k8s.io/v1 CustomResourceDefinition
NAME: kruise
LAST DEPLOYED: Wed Dec 23 10:22:12 2020
NAMESPACE: kruise
STATUS: deployed
REVISION: 1
TEST SUITE: None
You will see a number of deprecation warnings here, because newer Kubernetes versions deprecate the apiextensions.k8s.io/v1beta1 CRD API. You can adjust the API version of the CRDs according to your own cluster version.
4. Check the status of kruise deployment
[root@qd01-stop-k8s-master001 kruise]# helm ls -n kruise
NAME    NAMESPACE  REVISION  UPDATED                                  STATUS    CHART         APP VERSION
kruise  kruise     1         2020-12-23 10:22:12.963651877 +0800 CST  deployed  kruise-0.7.0
You can also see the kruise CRD types in the cluster:
[root@qd01-stop-k8s-master001 kruise]# kubectl get crd | grep kruise
advancedcronjobs.apps.kruise.io     2020-12-23T02:22:13Z
broadcastjobs.apps.kruise.io        2020-12-23T02:22:13Z
clonesets.apps.kruise.io            2020-12-23T02:22:13Z
daemonsets.apps.kruise.io           2020-12-23T02:22:13Z
sidecarsets.apps.kruise.io          2020-12-23T02:22:13Z
statefulsets.apps.kruise.io         2020-12-23T02:22:13Z
uniteddeployments.apps.kruise.io    2020-12-23T02:22:13Z
Let's start using these managers.
CloneSet
The CloneSet controller provides the ability to manage stateless applications efficiently. It is analogous to the native Deployment, but CloneSet provides many enhancements.
1. Let's first create a simple CloneSet; the YAML is as follows
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  labels:
    app: nginx-alpine
  name: nginx-alpine
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-alpine
  template:
    metadata:
      labels:
        app: nginx-alpine
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
2. Deployment
[root@qd01-stop-k8s-master001 demo]# kubectl apply -f CloneSet.yaml
cloneset.apps.kruise.io/nginx-alpine created
[root@qd01-stop-k8s-master001 demo]# kubectl get po | grep nginx
nginx-alpine-29g7n   1/1   Running   0   45s
nginx-alpine-bvgqm   1/1   Running   0   45s
nginx-alpine-q9tlw   1/1   Running   0   45s
nginx-alpine-s2t46   1/1   Running   0   44s
nginx-alpine-sslvf   1/1   Running   0   44s
From the output, this looks no different from a native Deployment. Note: kubectl get deployment will not show the nginx-alpine application; you need kubectl get cloneset to see it.
[root@qd01-stop-k8s-master001 demo]# kubectl get deployment
[root@qd01-stop-k8s-master001 demo]# kubectl get cloneset
NAME           DESIRED   UPDATED   UPDATED_READY   READY   TOTAL   AGE
nginx-alpine   5         5         5               5       5       2m16s
CloneSet allows users to configure a PVC template (volumeClaimTemplates) that generates a unique PVC for each Pod; Deployment does not support this. If the user does not specify this template, CloneSet creates Pods without PVCs.
3. Now create an example with a PVC template
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  labels:
    app: nginx-2
  name: nginx-2
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        volumeMounts:
        - name: data-vol
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data-vol
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: rbd
      resources:
        requests:
          storage: 2Gi
Deployment
[root@qd01-stop-k8s-master001 demo]# kubectl apply -f CloneSet.yaml
cloneset.apps.kruise.io/nginx-2 created
[root@qd01-stop-k8s-master001 demo]# kubectl get pv | grep data-vol
pvc-0fde19f3-ea4b-47e0-81be-a8e43812e47b   2Gi   RWO   Delete   Bound   default/data-vol-nginx-2-t55h8   rbd   83s
pvc-72accf10-57a6-4418-a1bc-c64633b84434   2Gi   RWO   Delete   Bound   default/data-vol-nginx-2-t49mk   rbd   82s
pvc-8fc8b9a5-afe8-446a-9190-08fcee0ec9f6   2Gi   RWO   Delete   Bound   default/data-vol-nginx-2-jw2zp   rbd   84s
pvc-c9fba396-e357-43e8-9510-616f698da765   2Gi   RWO   Delete   Bound   default/data-vol-nginx-2-b5fdd   rbd   84s
pvc-e5302eab-a9f2-4a71-a5a3-4cd43205e8a0   2Gi   RWO   Delete   Bound   default/data-vol-nginx-2-l54dz   rbd   84s
[root@qd01-stop-k8s-master001 demo]# kubectl get po | grep nginx
nginx-2-b5fdd   1/1   Running   0   97s
nginx-2-jw2zp   1/1   Running   0   97s
nginx-2-l54dz   1/1   Running   0   97s
nginx-2-t49mk   1/1   Running   0   96s
nginx-2-t55h8   1/1   Running   0   96s
As you can see from the deployment results, each pod creates a PVC, which is not possible with native Deployment.
Note:
Each automatically created PVC has an ownerReference pointing to the CloneSet, so when the CloneSet is deleted, all Pods and PVCs it created are deleted as well.
Each Pod and PVC created by the CloneSet carries the label apps.kruise.io/cloneset-instance-id: xxx. An associated Pod and PVC share the same instance-id, and their names are suffixed with it.
If a Pod is deleted by the CloneSet controller during scale-down, the PVC associated with it is deleted together with it.
If a Pod is deleted or evicted directly by an external call, its associated PVC still exists; when the CloneSet controller later scales back up because the replica count is insufficient, the newly created Pod reuses the original Pod's instance-id and re-attaches the original PVC.
When a Pod is upgraded by rebuild, the associated PVC is deleted and recreated together with the Pod; when a Pod is upgraded in place, the associated PVC keeps being used.
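As a hedged way to verify these points on your own cluster (standard kubectl only; the PVC name is taken from the earlier output and will differ in your environment):

# Show the owner of an automatically created PVC (should point to the CloneSet)
kubectl get pvc data-vol-nginx-2-t55h8 -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'
# Show the instance-id label shared by Pods and PVCs
kubectl get pod,pvc -l apps.kruise.io/cloneset-instance-id -L apps.kruise.io/cloneset-instance-id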
4. Specify Pods to delete when scaling down
When a CloneSet is scaled down, the user sometimes needs to specify particular Pods to delete. This is not possible with StatefulSet or Deployment, because StatefulSet always deletes Pods in order of their ordinal numbers, and Deployment/ReplicaSet can only delete Pods according to the ordering logic built into the controller.
CloneSet allows users to specify the name of the Pod they want to delete while reducing the number of replicas.
Now let's modify the deployment file in the above example to specify that the Pod nginx-2-t55h8 be deleted
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  labels:
    app: nginx-2
  name: nginx-2
spec:
  replicas: 4
  scaleStrategy:
    podsToDelete:
    - nginx-2-t55h8
Then update the yaml file
[root@qd01-stop-k8s-master001 demo]# kubectl apply -f CloneSet.yaml
cloneset.apps.kruise.io/nginx-2 configured
[root@qd01-stop-k8s-master001 demo]# kubectl get po | grep nginx
nginx-2-b5fdd   1/1   Running   0   11m
nginx-2-jw2zp   1/1   Running   0   11m
nginx-2-l54dz   1/1   Running   0   11m
nginx-2-t49mk   1/1   Running   0   11m
Looking at the output, the Pod nginx-2-t55h8 is gone.
This feature is useful when, for example, a machine breaks down or its load is too high and you want to delete a specific Pod.
5. Upgrade features
CloneSet provides the same three upgrade types as Advanced StatefulSet; the default is ReCreate.
ReCreate: the controller deletes the old Pod and its PVC, then recreates them with the new version.
InPlaceIfPossible: the controller first tries to upgrade the Pod in place and falls back to a rebuild upgrade if that is not possible. Currently, only fields such as spec.template.metadata.* and spec.template.spec.containers[x].image can be upgraded in place.
InPlaceOnly: the controller only allows in-place upgrades, so users may only modify the restricted fields mentioned above; attempts to modify other fields will be rejected by Kruise.
Now let's try the in-place upgrade feature by upgrading the nginx image from nginx:alpine to nginx:latest.
First, modify the YAML file; only the changed parts are shown here.
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
...
spec:
  replicas: 4
  updateStrategy:
    type: InPlaceIfPossible
    inPlaceUpdateStrategy:
      gracePeriodSeconds: 10
...
    spec:
      containers:
      - name: nginx
        image: nginx
Perform the upgrade
[root@qd01-stop-k8s-master001 demo]# kubectl apply -f CloneSet.yaml
cloneset.apps.kruise.io/nginx-2 configured
Use kubectl describe to view the upgrade process:
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        59m                default-scheduler        0/22 nodes are available: 22 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        59m                default-scheduler        0/22 nodes are available: 22 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        59m                default-scheduler        0/22 nodes are available: 22 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               59m                default-scheduler        Successfully assigned default/nginx-2-l54dz to qd01-stop-k8s-node007.ps.easou.com
  Normal   SuccessfulAttachVolume  59m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e5302eab-a9f2-4a71-a5a3-4cd43205e8a0"
  Normal   Pulling                 58m                kubelet                  Pulling image "nginx:alpine"
  Normal   Pulled                  58m                kubelet                  Successfully pulled image "nginx:alpine" in 6.230045975s
  Normal   Killing                 55s                kubelet                  Container nginx definition changed, will be restarted
  Normal   Pulling                 55s                kubelet                  Pulling image "nginx"
  Normal   Pulled                  26s                kubelet                  Successfully pulled image "nginx" in 29.136659264s
  Normal   Created                 23s (x2 over 58m)  kubelet                  Created container nginx
  Normal   Started                 23s (x2 over 58m)  kubelet                  Started container nginx
As you can see from the output ("Container nginx definition changed, will be restarted"), the Pod was not deleted and rebuilt; instead the container image was updated directly and the container was restarted.
Upgrading in place avoids Pod deletion and recreation, saving upgrade time and reducing scheduling overhead.
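As a hedged sanity check (standard kubectl; the Pod name is taken from the demo above), you can confirm the upgrade really happened in place: the Pod name and node stay the same while the container restart count increases.

# Pod name and node are unchanged; only the container was restarted
kubectl get pod nginx-2-l54dz -o jsonpath='{.metadata.name}{"  "}{.spec.nodeName}{"  restarts="}{.status.containerStatuses[0].restartCount}{"\n"}'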
6. Partition-based batch grayscale release
The semantics of partition is the number or percentage of Pods to keep at the old version; it defaults to 0. The partition here does not imply any ordering or sequence number.
When partition is set during a release: if it is a number, the controller updates (replicas - partition) Pods to the latest version; if it is a percentage, the controller updates (replicas * (100% - partition)) Pods to the latest version.
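For example, with replicas=5 and partition=3 the controller upgrades 5 - 3 = 2 Pods to the new version; with replicas=10 and partition=20% it upgrades 10 * (100% - 20%) = 8 Pods.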
Now I update the image of the above example to nginx:1.19.6-alpine and set partition=3
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  labels:
    app: nginx-2
  name: nginx-2
spec:
  replicas: 5
  updateStrategy:
    type: InPlaceIfPossible
    inPlaceUpdateStrategy:
      gracePeriodSeconds: 10
    partition: 3
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6-alpine
View the result
Status:
  Available Replicas:      5
  Collision Count:         0
  Label Selector:          app=nginx-2
  Observed Generation:     6
  Ready Replicas:          5
  Replicas:                5
  Update Revision:         nginx-2-7b44cb9c8
  Updated Ready Replicas:  2
  Updated Replicas:        2
Events:
  Type    Reason                      Age    From                 Message
  ----    ------                      ----   ----                 -------
  Normal  SuccessfulUpdatePodInPlace  45m    cloneset-controller  successfully update pod nginx-2-l54dz in-place (revision nginx-2-5879fd9f7)
  Normal  SuccessfulUpdatePodInPlace  44m    cloneset-controller  successfully update pod nginx-2-t49mk in-place (revision nginx-2-5879fd9f7)
  Normal  SuccessfulUpdatePodInPlace  43m    cloneset-controller  successfully update pod nginx-2-b5fdd in-place (revision nginx-2-5879fd9f7)
  Normal  SuccessfulUpdatePodInPlace  43m    cloneset-controller  successfully update pod nginx-2-jw2zp in-place (revision nginx-2-5879fd9f7)
  Normal  SuccessfulCreate            22m    cloneset-controller  succeed to create pod nginx-2-zpp8z
  Normal  SuccessfulUpdatePodInPlace  5m22s  cloneset-controller  successfully update pod nginx-2-zpp8z in-place (revision nginx-2-7b44cb9c8)
  Normal  SuccessfulUpdatePodInPlace  4m55s  cloneset-controller  successfully update pod nginx-2-jw2zp in-place (revision nginx-2-7b44cb9c8)
[root@qd01-stop-k8s-master001 demo]# kubectl get pod -L controller-revision-hash
NAME            READY   STATUS    RESTARTS   AGE   CONTROLLER-REVISION-HASH
nginx-2-b5fdd   1/1     Running   1          99m   nginx-2-5879fd9f7
nginx-2-jw2zp   1/1     Running   2          99m   nginx-2-7b44cb9c8
nginx-2-l54dz   1/1     Running   1          99m   nginx-2-5879fd9f7
nginx-2-t49mk   1/1     Running   1          99m   nginx-2-5879fd9f7
nginx-2-zpp8z   1/1     Running   1          19m   nginx-2-7b44cb9c8
From the output, we can see that Update Revision has been updated to nginx-2-7b44cb9c8, but only two Pods have been upgraded to that revision.
Because we set partition=3 with 5 replicas, the controller upgraded only 2 Pods.
The partition-based batch grayscale feature improves on the original Pod upgrade model, making upgrades more flexible and enabling grayscale rollouts. Awesome.
7. Finally, a demonstration of pausing a release
Users can pause a release by setting paused to true; the controller will still manage the replica count:
First, change the image in the example to nginx:1.18.0 and set the number of replicas to 10, then apply the updated YAML. The result is as follows:
[root@qd01-stop-k8s-master001 demo]# kubectl get po -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort
nginx-2-7lzx9:  nginx:1.18.0,
nginx-2-b5fdd:  nginx:1.18.0,
nginx-2-jw2zp:  nginx:1.18.0,
nginx-2-l54dz:  nginx:1.18.0,
nginx-2-nknrt:  nginx:1.18.0,
nginx-2-rgmsc:  nginx:1.18.0,
nginx-2-rpr5z:  nginx:1.18.0,
nginx-2-t49mk:  nginx:1.18.0,
nginx-2-v2bpx:  nginx:1.18.0,
nginx-2-zpp8z:  nginx:1.18.0,
Now modify the YAML file, change the image to nginx:alpine, and apply the update; the result is as follows:
[root@qd01-stop-k8s-master001 demo]# kubectl get po -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort
nginx-2-7lzx9:  nginx:1.18.0,
nginx-2-b5fdd:  nginx:1.18.0,
nginx-2-jw2zp:  nginx:1.18.0,
nginx-2-l54dz:  nginx:1.18.0,
nginx-2-nknrt:  nginx:alpine,
nginx-2-rgmsc:  nginx:alpine,
nginx-2-rpr5z:  nginx:alpine,
nginx-2-t49mk:  nginx:1.18.0,
nginx-2-v2bpx:  nginx:alpine,
nginx-2-zpp8z:  nginx:1.18.0,
You can see that four Pods have been updated to nginx:alpine. Now modify the YAML file again and add paused: true:
spec:
  replicas: 10
  updateStrategy:
    paused: true
    type: InPlaceIfPossible
    inPlaceUpdateStrategy:
      gracePeriodSeconds: 10
Apply the updated YAML again and check the progress: no more Pods are being updated, and the image upgrade has been paused.
[root@qd01-stop-k8s-master001 demo]# kubectl get po -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort
nginx-2-7lzx9:  nginx:1.18.0,
nginx-2-b5fdd:  nginx:1.18.0,
nginx-2-jw2zp:  nginx:1.18.0,
nginx-2-l54dz:  nginx:1.18.0,
nginx-2-nknrt:  nginx:alpine,
nginx-2-rgmsc:  nginx:alpine,
nginx-2-rpr5z:  nginx:alpine,
nginx-2-t49mk:  nginx:1.18.0,
nginx-2-v2bpx:  nginx:alpine,
nginx-2-zpp8z:  nginx:1.18.0,
Finally, remove paused: true (or set it to false), apply the YAML file again, and the upgrade continues.
[root@qd01-stop-k8s-master001 demo]# kubectl get po -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort
nginx-2-7lzx9:  nginx:alpine,
nginx-2-b5fdd:  nginx:alpine,
nginx-2-jw2zp:  nginx:alpine,
nginx-2-l54dz:  nginx:alpine,
nginx-2-nknrt:  nginx:alpine,
nginx-2-rgmsc:  nginx:alpine,
nginx-2-rpr5z:  nginx:alpine,
nginx-2-t49mk:  nginx:alpine,
nginx-2-v2bpx:  nginx:alpine,
nginx-2-zpp8z:  nginx:alpine,
That completes the release-pause demonstration; the benefit of this feature is that we can interrupt an upgrade at any point during the rollout.
In addition, CloneSet has many other features, such as maxUnavailable (maximum number of unavailable Pods), maxSurge (maximum number of extra Pods), upgrade ordering, scatter strategy, lifecycle hooks, and so on.
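As a hedged snippet of how two of these are configured (maxUnavailable and maxSurge are fields of the CloneSet updateStrategy; the values below are only illustrative):

spec:
  updateStrategy:
    type: InPlaceIfPossible
    maxUnavailable: 20%   # at most 20% of Pods may be unavailable during the upgrade
    maxSurge: 1           # at most 1 extra Pod may be created above the desired replicas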
The above shows how to use the Kubernetes application manager OpenKruise; hopefully you have picked up some useful knowledge or skills from it.