2025-03-31 Update From: SLTechnology News&Howtos
StatefulSet
1. What is StatefulSet?
StatefulSet (formerly known as PetSet) is a pod controller, like ReplicaSet (RS), ReplicationController (RC), Deployment, and others.
StatefulSet is designed to manage stateful services (whereas Deployment, RC, and ReplicaSet are designed for stateless services).
What is a stateless service?
In a production environment, the names of stateless pods are random, pods are scaled out and in at no fixed time, and each pod can be replaced by a newly generated one.
2. Application scenarios of StatefulSet:
Stable persistent storage: a pod can still access the same persistent data after being rescheduled; this is based on PVC.
Stable network identity: a pod's name and hostname remain unchanged after rescheduling; this is based on a Headless Service (a service without a cluster IP).
Ordered deployment and ordered scale-up: pods are deployed or scaled up in the defined order (from 0 to N-1; every pod before the next one must be Running and Ready); this is based on init containers.
Ordered scale-down and ordered deletion: pods are removed in order from N-1 down to 0.
As the scenarios above show, a statefulset consists of the following parts:
1) headless Service: a headless service, used to define the network identity (domain name resolution) of the pods.
2) statefulSet: defines the application itself.
3) volumeClaimTemplate: a template that automatically creates a PVC for each pod.
Compared with a stateless service, the characteristics of StatefulSet can be summarized as: pod names remain unchanged; replicas start and stop in a fixed order; storage is persistent, with each pod holding its own data; and PVCs are created automatically from the defined template.
3. Next, practice using statefulset resources with an example.
Deploy an nginx service through a storage class and a statefulset, persist its data, and perform the other operations of a stateful service.
The operation flow is as follows:
1) deploy a storage class through an nfs service (creates PVs)
2) create a statefulset resource object (creates PVCs)
3) test data persistence
4) scale replicas out and in
5) partition update
1) deploy storage class through nfs
The deployment is given directly below; for more information and parameter explanations, refer to the StorageClass chapter earlier in this series.
# enable nfs:
[root@master yaml]# yum -y install nfs-utils
[root@master yaml]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master yaml]# mkdir /nfsdata
[root@master yaml]# systemctl start rpcbind
[root@master yaml]# systemctl start nfs-server
[root@master yaml]# systemctl enable nfs-server
[root@master yaml]# showmount -e
Export list for master:
/nfsdata *
# create rbac permissions:
[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Run the yaml file.
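A sketch of applying the RBAC manifest and verifying the three objects it defines (the filename matches the vim command above):

```shell
# Apply the RBAC objects defined in rbac-rolebind.yaml
kubectl apply -f rbac-rolebind.yaml
# Verify that the three objects were created
kubectl get serviceaccount nfs-provisioner
kubectl get clusterrole nfs-provisioner-runner
kubectl get clusterrolebinding run-nfs-provisioner
```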
# create nfs-Deployment:
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-deploy          # name of the provisioner (custom)
        - name: NFS_SERVER
          value: 172.16.1.30         # IP address of the nfs server
        - name: NFS_PATH
          value: /nfsdata            # nfs shared directory
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.16.1.30
          path: /nfsdata
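The Deployment can then be applied and checked like this (a sketch; the provisioner pod should reach Running before continuing):

```shell
kubectl apply -f nfs-deployment.yaml
# The provisioner pod carries the label defined in the template above
kubectl get pod -l app=nfs-client-provisioner
```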
# create storage class:
[root@master yaml]# vim sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: statefu-nfs
  namespace: default
provisioner: nfs-deploy
reclaimPolicy: Retain
# After running the yaml file, view the information of the sc:
[root@master yaml]# kubectl get sc
NAME          PROVISIONER   AGE
statefu-nfs   nfs-deploy    48s
2) create a statefulset resource:
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc           # defines the headless service; labels must be defined
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  clusterIP: None              # the cluster ip must be defined as None
  selector:
    app: nginx                 # the label specified here must be defined below
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sfs-web
spec:
  serviceName: headless-svc    # the headless service name defined above
  replicas: 3
  selector:
    matchLabels:
      app: nginx               # select the label
  template:
    metadata:
      labels:
        app: nginx             # define the label
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:          # define data persistence
        - name: test-nginx
          mountPath: /usr/share/nginx/html   # mount path inside the container
  volumeClaimTemplates:        # a pvc is created from this template for each pod
  - metadata:
      name: test-nginx
      annotations:
        volume.beta.kubernetes.io/storage-class: statefu-nfs   # must match the name of the storage class created earlier
    spec:
      accessModes:
      - ReadWriteOnce          # use the ReadWriteOnce access mode
      resources:
        requests:
          storage: 100Mi       # request 100Mi of space

[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc unchanged
statefulset.apps/sfs-web created
# After a successful run, check the status of the created headless service and statefulset:
# View the details of the svc:
# View the created pods (make sure they are running normally):
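The checks described above can be performed with the following commands (output omitted here; the object names match the manifests defined earlier):

```shell
kubectl get svc headless-svc       # headless service: CLUSTER-IP shows "None"
kubectl get statefulset sfs-web    # READY should show 3/3
kubectl get pod -o wide            # pods sfs-web-0, sfs-web-1, sfs-web-2
```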
You can see that the pods are ordered: their names (and domain names) are suffixed -0, -1, -2 in sequence.
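Thanks to the headless service, each pod gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A quick way to check this from a throwaway pod (a sketch; the busybox image tag is an assumption):

```shell
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup sfs-web-0.headless-svc
# Each replica is individually addressable, e.g.:
#   sfs-web-0.headless-svc.default.svc.cluster.local
#   sfs-web-1.headless-svc.default.svc.cluster.local
```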
# As mentioned above, PVs are created through the sc, and PVCs are created automatically through the template in the sts. Check the PVs and PVCs in the cluster:
You can see that the PVs and PVCs were created successfully and their status is Bound; the access mode is RWO (the mode defined in the PVC template); and they were created dynamically through the sc and sts, not manually.
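The PV/PVC check above corresponds to the commands below (output omitted; PVC names combine the volumeClaimTemplates name with the pod name, e.g. test-nginx-sfs-web-0):

```shell
kubectl get pv     # dynamically provisioned volumes, STATUS Bound
kubectl get pvc    # test-nginx-sfs-web-0 / -1 / -2, STATUS Bound
```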
3) persistence of test data
# Check on the nfs server whether a shared directory has been generated for each pvc:
# enter a pod directory to create a test web page file:
[root@master yaml]# cd /nfsdata/default-test-nginx-sfs-web-0-pvc-4ef5e77e-1198-4ccc-81a7-db48d8a75023/
[root@master default-test-nginx-sfs-web-0-pvc-4ef5e77e-1198-4ccc-81a7-db48d8a75023]# echo "hello world" > index.html
[root@master default-test-nginx-sfs-web-0-pvc-4ef5e77e-1198-4ccc-81a7-db48d8a75023]# ll
total 4
-rw-r--r-- 1 root root 12 Jan 12 17:13 index.html
# Go into the pod to check whether the file is mounted successfully:
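A minimal way to perform this check (a sketch; the mount path is the one defined in the StatefulSet above):

```shell
kubectl exec -it sfs-web-0 -- cat /usr/share/nginx/html/index.html
# Expected content: the "hello world" written on the NFS side
```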
# next, we delete the pod:
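The deletion and the subsequent verification can be sketched as:

```shell
kubectl delete pod sfs-web-0
kubectl get pod -w    # watch: a new pod with the SAME name, sfs-web-0, is created
# After it is Running, check that the data survived:
kubectl exec sfs-web-0 -- cat /usr/share/nginx/html/index.html
```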
Verify: does the regenerated pod replace the old one, and is the data lost?
The test results above show that stateful services differ from stateless ones: the regenerated pod keeps the same name; even after deletion it is not superseded by a pod with a new name; and the data in the original pod still exists.
4) capacity expansion and reduction of replicas:
(1) expansion operation:
[root@master yaml] # vim statefulset.yaml
# Rerun the yaml file and view the newly generated pods:
(2) scale-down operation (pods are removed in reverse order):
# Rerun the service and check the pod status:
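Instead of editing replicas in the yaml and re-applying, kubectl scale achieves the same thing; the replica counts below are arbitrary example values:

```shell
kubectl scale statefulset sfs-web --replicas=6   # scale out: sfs-web-3..5 created in order
kubectl get pod -w                               # each pod waits for the previous one to be Ready
kubectl scale statefulset sfs-web --replicas=3   # scale in: sfs-web-5, then -4, then -3 removed
```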
The scale-out and scale-in operations above show that statefulset resources are deployed in order in the production environment: scaling out and scaling in are ordered, and pods start and stop in a defined sequence.
5) Partition update operation:
The main purpose of a partition update is to divide the pods into partitions, so that only part of an application's pods, or selected pods, are updated.
# before updating, let's take a look at the updated parameters:
[root@master yaml]# kubectl explain sts.spec.updateStrategy.rollingUpdate
KIND:     StatefulSet
VERSION:  apps/v1

RESOURCE: rollingUpdate

DESCRIPTION:
     RollingUpdate is used to communicate parameters when Type is
     RollingUpdateStatefulSetStrategyType.

     RollingUpdateStatefulSetStrategy is used to communicate parameter for
     RollingUpdateStatefulSetStrategyType.

FIELDS:
   partition
     Partition indicates the ordinal at which the StatefulSet should be
     partitioned. Default value is 0.
# Based on the parameter above, partition the pods and upgrade the version of the pods in the specified partition:
We set the number of replicas to 8 and define a partition update with the partition value 4, which means the update starts from the pod at that ordinal (inclusive); pods with lower ordinals are not updated.
Note: if no partition is specified, the default partition is 0, and all pods are updated.
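A sketch of the relevant fields in the StatefulSet spec, assuming the replica count of 8 and the partition value of 4 described above:

```yaml
spec:
  replicas: 8
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4    # pods with ordinal >= 4 are updated; sfs-web-0..3 are left alone
```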
# The version upgrade itself is simple to test: just change the image to a newer nginx version. After running the service again, you can see that the image of pods at or above the partition ordinal is upgraded, while the pods in the lower partition keep their old version.
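The image change and its observation can be sketched as follows; the nginx tag is a hypothetical example:

```shell
# Change the image (equivalent to editing it in the yaml and re-applying)
kubectl set image statefulset/sfs-web nginx=nginx:1.17
kubectl rollout status statefulset/sfs-web
# List each pod with its current image to see which partition was updated
kubectl get pod -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```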
4. StatefulSet limitations
1) The resource object is still in beta (test) state and requires Kubernetes v1.5 or above.
2) All volumes used by the pods must be provisioned in advance, as PVs or by an administrator.
3) To ensure data safety, volumes are not deleted when the statefulset is deleted.
4) A statefulset requires a Headless Service to define the DNS domain, and the service must be created before the statefulset.
5) The statefulset feature set is not yet complete; for example, the update operation above still needs to be handled manually.
This concludes the understanding and practice of statefulset resources. If an application does not require stable identifiers or ordered deployment, deletion, and scaling, deploy it with a stateless controller such as Deployment or ReplicaSet. Otherwise, use the statefulset controller in production to ensure that the relationship between pods and volumes is not broken: even if a pod dies, the previously mounted disk can still be used.