
StatefulSet

StatefulSet: a Pod controller.

RC, RS, Deployment, and DS (DaemonSet) -> stateless services.

template: Pods created from the same template are identical in state (except for name, IP, and domain name).

In other words, any Pod can be deleted and replaced by a newly generated one.

Stateful services: events from previous communications must be recorded and used as the basis for subsequent communications. For example, database services such as MySQL: the Pod name cannot be changed at will, and each Pod has its own unique data-persistence directory.

MySQL, for example, uses master-slave relationships.

If stateless services are like cattle or sheep (any one can be swapped for another), stateful services are like pets (each one is unique and irreplaceable).

Each Pod corresponds to a PVC, and each PVC corresponds to a PV.

StorageClass: automatically creates PVs.

Still to solve: automatically creating PVCs -> volumeClaimTemplates.

[root@master ~]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myhttpd
        image: httpd
        ports:
        - containerPort: 80

[root@master ~]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset-test created
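The Pods should come up in order, each with a stable ordinal name. A quick check (the output shape below is an assumption; ages and readiness will vary):

[root@master ~]# kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
statefulset-test-0   1/1     Running   0          30s
statefulset-test-1   1/1     Running   0          20s
statefulset-test-2   1/1     Running   0          10s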

Deployment: Deployment + RS + a random string as the Pod name. The Pods have no order and can be replaced at will.

1. headless-svc: a headless Service. It has no cluster IP, so it does not load-balance. StatefulSet requires Pod names to be ordered and stable: even after a Pod is rebuilt, its name stays the same. The headless Service gives each backend Pod a stable name (see the lookup sketch after this list).

2. statefulset: defines the concrete application.

3. volumeClaimTemplates: automatically creates a PVC for each Pod, providing dedicated storage for the backend.
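As mentioned in point 1, the headless Service gives every Pod a stable in-cluster DNS record of the form <pod-name>.<serviceName>.<namespace>.svc.cluster.local. A minimal lookup sketch (assuming the StatefulSet above is running in the default namespace, a busybox:1.28 test image, and the default cluster.local domain):

[root@master ~]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup statefulset-test-0.headless-svc.default.svc.cluster.local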

First, create a StorageClass resource object.

1. Set up an NFS service (the StorageClass will be based on NFS).

[root@master ~]# showmount -e
Export list for master:
/nfsdata *
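For reference, a sketch of the NFS server setup this export implies (CentOS 7 package names and export options are assumptions; adjust to your environment):

[root@master ~]# yum -y install nfs-utils rpcbind
[root@master ~]# mkdir -p /nfsdata
[root@master ~]# echo '/nfsdata *(rw,sync,no_root_squash)' >> /etc/exports
[root@master ~]# systemctl start rpcbind nfs-server
[root@master ~]# exportfs -r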

2. Create RBAC permissions.

[root@master ~]# mkdir yaml
[root@master ~]# cp rbac-rolebind.yaml yaml/
[root@master ~]# cd yaml/
[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Error report:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner unchanged
The ClusterRoleBinding "run-nfs-provisioner" is invalid: subjects[0].namespace: Required value

Troubleshooting: at line 38 of the file, add "namespace: default" under the ServiceAccount subject.

Successful:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
serviceaccount/nfs-provisioner unchanged
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner unchanged
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner configured
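For clarity, after the fix the subjects section of the ClusterRoleBinding reads (namespace added, as the error message demands):

subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default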

3. Create a Deployment resource object, replacing the real NFS service with a Pod.

[root@master ~]# cp nfs-deployment.yaml yaml/
[root@master ~]# cd yaml/
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn
        - name: NFS_SERVER
          value: 192.168.1.10
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.10
          path: /nfsdata

[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
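The provisioner Pod should now be running; a quick check (the random suffix below is illustrative):

[root@master yaml]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65d7c5d46c-xxxxx   1/1     Running   0          1m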

4. Create storageclass

[root@master ~]# cp test-storageclass.yaml yaml/
[root@master ~]# cd yaml/
[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs
provisioner: bdqn
reclaimPolicy: Retain

Delete the previously created sc:

[root@master yaml]# kubectl delete sc sc-nfs
storageclass.storage.k8s.io "sc-nfs" deleted

Check that it is gone:

[root@master yaml]# kubectl get sc
No resources found.

Apply the file again:

[root@master yaml]# kubectl apply -f test-storageclass.yaml
storageclass.storage.k8s.io/sc-nfs created

Second, solve the problem of automatically creating PVCs

Delete the previously created StatefulSet first:

[root@master yaml]# kubectl delete -f statefulset.yaml
service "headless-svc" deleted
statefulset.apps "statefulset-test" deleted

Then modify statefulset.yaml:

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:        # this specifies the storageclass
        volume.beta.kubernetes.io/storage-class: sc-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Generate it:

[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset-test created

Verify that it can be used:

[root@master yaml]# kubectl exec -it statefulset-test-0 /bin/sh
# cd /mnt
# touch testfile
# exit
[root@master yaml]# ls /nfsdata/default-test-statefulset-test-0-pvc-2fd45b61-6c69-4901-80da-66184e220b6f/
testfile
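Behind the scenes, one PVC is created per Pod, named <claim-template>-<pod-name>, each bound to its own dynamically provisioned PV. A sketch of the expected listing (volume IDs will differ):

[root@master yaml]# kubectl get pvc
NAME                      STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-statefulset-test-0   Bound    pvc-2fd45b61-...   100Mi      RWO            sc-nfs         2m
test-statefulset-test-1   Bound    pvc-...            100Mi      RWO            sc-nfs         2m
test-statefulset-test-2   Bound    pvc-...            100Mi      RWO            sc-nfs         2m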

Extended exercise:

Create a namespace named after yourself, and run all of the following resources in it. Use a StatefulSet to run an httpd web service with 3 Pods, each serving different main-page content and each with its own dedicated persistent storage. Then delete one of the Pods, inspect the newly generated Pod, and compare with Pods controlled by a Deployment resource controller: what is the difference?

(1) Create a StorageClass resource object.

Note: the NFS service must be running.

1. Create the namespace YAML file

[root@master yaml]# vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-lll        # name of the namespace

Execute it.

[root@master yaml]# kubectl apply -f namespace.yaml

Check it out

[root@master yaml]# kubectl get namespaces

2. Create RBAC permissions.

[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xgp-lll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: xgp-lll
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: xgp-lll
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute it.

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml

3. Create a Deployment resource object, replacing the real NFS service with a Pod.

[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xgp-lll
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: xgp
        - name: NFS_SERVER
          value: 192.168.1.21
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.21
          path: /nfsdata

Execute it.

[root@master yaml]# kubectl apply -f nfs-deployment.yaml

Check it out

[root@master yaml]# kubectl get pod -n xgp-lll

4. Create the StorageClass YAML file

[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
  namespace: xgp-lll
provisioner: xgp        # associated with the Deployment above through the provisioner field
reclaimPolicy: Retain

Execute it.

[root@master yaml]# kubectl apply -f test-storageclass.yaml

Check it out

[root@master yaml]# kubectl get sc -n xgp-lll
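Note how the pieces connect: the StorageClass's provisioner field must match the PROVISIONER_NAME environment variable in the Deployment above, and the volumeClaimTemplates annotation in the next section must name this StorageClass. A side-by-side sketch:

# test-storageclass.yaml
provisioner: xgp
# nfs-deployment.yaml
env:
- name: PROVISIONER_NAME
  value: xgp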

(2) Solve the problem of automatically creating PVCs

1. Create the StatefulSet YAML file

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: xgp-lll
  labels:
    app: headless-svc
spec:
  clusterIP: None
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: xgp-lll
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /usr/local/apache2/htdocs
          name: test
  volumeClaimTemplates:        # automatically create a PVC, providing dedicated storage for each backend Pod
  - metadata:
      name: test
      annotations:             # this specifies the storageclass
        volume.beta.kubernetes.io/storage-class: stateful-nfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Execute it.

[root@master yaml]# kubectl apply -f statefulset.yaml

Check it out

[root@master yaml]# kubectl get pod -n xgp-lll

2. Verify the data storage

Create a file in the container

First:

[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-0 /bin/bash
root@statefulset-test-0:/usr/local/apache2# echo 123 > /usr/local/apache2/htdocs/index.html

Second:

[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-1 /bin/bash
root@statefulset-test-1:/usr/local/apache2# echo 456 > /usr/local/apache2/htdocs/index.html

Third:

[root@master yaml]# kubectl exec -it -n xgp-lll statefulset-test-2 /bin/bash
root@statefulset-test-2:/usr/local/apache2# echo 789 > /usr/local/apache2/htdocs/index.html

Check the host:

First:

[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-0-pvc-ccaa02df-4721-4453-a6ec-4f2c928221d7/index.html
123

Second:

[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-1-pvc-88e60a58-97ea-4986-91d5-a3a6e907deac/index.html
456

Third:

[root@master yaml]# cat /nfsdata/xgp-lll-test-statefulset-test-2-pvc-4eb2bbe2-63d2-431a-ba3e-b7b8d7e068d3/index.html
789

Finally, access the service.
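A sketch of that access test from inside the cluster (busybox:1.28 and the default cluster.local domain are assumptions; each Pod should return its own page):

[root@master yaml]# kubectl run -it --rm web-test -n xgp-lll --image=busybox:1.28 --restart=Never \
  -- wget -qO- statefulset-test-0.headless-svc.xgp-lll.svc.cluster.local
123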

The difference and selection between Kubernetes (k8s) StatefulSet and Deployment

Comparing Deployment and StatefulSet in terms of access method, to sum up:

If the deployment has no extra data dependencies or state to maintain, or if replicas is 1, prefer Deployment.

If you simply want data persistence so that a Pod's data survives a crash and restart, pv/pvc is enough.

If apps only need to communicate with each other and need not be exposed externally, a headless Service is enough.

If you need Service load balancing, do not use StatefulSet for that; prefer a ClusterIP-type Service and forward via its serviceName (see the sketch after this list).

If there are multiple replicas, each mounting its own PV with different data, consider StatefulSet: Pods and PVs correspond one-to-one, and a Pod that dies and restarts must reconnect to its previous PV, not to any other.

If you don't need StatefulSet, don't use it.
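For the load-balancing point above, a minimal ClusterIP Service sketch (the name web-svc is hypothetical; it selects the same Pods as the headless Service but gets a virtual IP that load-balances across them):

apiVersion: v1
kind: Service
metadata:
  name: web-svc        # hypothetical name, for illustration
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: headless-pod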

A scenario where you can only use StatefulSet:

Recently we deployed a service on Microsoft's AKS platform. Because the Deployment had to request volumes dynamically when scaling, it needed the volumeClaimTemplates attribute; the Deployment object (as of 1.15) does not support that attribute, only StatefulSet does, so we had to use the latter. For now this feels like putting the cart before the horse, but k8s may support the attribute on Deployment in the future.

Note:

If you use StatefulSet, spec.serviceName must point to the headless Service's name, and that step cannot be omitted. The official documentation requires the headless Service to be created before the StatefulSet. In testing, omitting it does not stop the Pods from running (they reach Running status), but there is no stable network ID through which the rest of the cluster can reach the service. The official ordering requirement exists so that each Pod can be matched to the Service automatically as soon as it starts.

The reason the headless Service must be specified explicitly is that an admin can create multiple Services, of multiple types, for one StatefulSet; k8s cannot know which Service name to use as part of the Pods' in-cluster domain names.

The Deployment type must not have this parameter, or an error is reported.
