
Introduction to storage classes of kubernetes


K8s runs many services and manages many resource objects. If you want to create a service whose data is persisted, you need to know in advance which PVs are available. And if you create a PV in advance for that service, you also need to know how much space the service will need.

Environment introduction

host      IP address      service
master    192.168.1.21    k8s
node01    192.168.1.22    k8s
node02    192.168.1.23    k8s

This experiment continues from the environment set up in https://blog.51cto.com/14320361/2464655

Introduction to storage class

By defining different storage classes, Kubernetes cluster administrators can meet the storage needs of users with different quality-of-service levels, backup policies, or other custom policies. Dynamic volume provisioning is implemented with StorageClass, which allows storage volumes to be created on demand. Without dynamic provisioning, cluster administrators have to create new storage volumes manually. With dynamic provisioning, Kubernetes automatically creates the storage its users request.

The whole process of dynamic storage provisioning based on StorageClass is as follows (a minimal manifest sketch follows the list):

1) The cluster administrator creates storage classes (StorageClass) in advance.
2) A user creates a persistent storage claim (PVC: PersistentVolumeClaim) that references a storage class.
3) The claim tells the system that it needs a piece of persistent storage (PV: PersistentVolume).
4) The system reads the information in the storage class.
5) Based on that information, the system automatically creates the PV the PVC needs in the background.
6) The user creates a Pod that uses the PVC.
7) Applications in the Pod persist data through the PVC.
8) The PVC uses the PV for the final persistence of the data.
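To make the flow concrete, here is a minimal, hedged sketch of the two objects it revolves around; the class name "example-nfs" and the provisioner string are placeholders, not the names used in the experiment below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs               # created by the administrator (step 1)
provisioner: example.com/nfs      # must match the name the external provisioner registers
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim             # created by the user (step 2)
spec:
  storageClassName: example-nfs   # referencing the class triggers steps 3-5
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi                # a PV of this size is provisioned and bound automatically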

That is the overall flow; now let's walk through it in more detail.

With static provisioning, as described earlier, we have to create PVs manually. If there are not enough resources and no suitable PV can be found, the Pod stays in the Pending state, waiting for a match that never comes. Dynamic provisioning solves these problems by creating PVs for you automatically.

In other words, however much capacity you request, a PV of that capacity is created for you automatically by k8s. When you create a PVC, a PV has to be found for it; that job is handed over to the storage class, which creates the PV for you. The storage class implements support for a specific storage backend and calls its API directly to create the volume, so there is no need to create PVs manually.

And consider this: when there are many nodes and many workloads, creating PVs by hand is a large amount of work and is not easy to maintain.

The main object behind dynamic provisioning is StorageClass. It declares which storage backend you use, connects to it for you, and then automatically creates PVs.

An example makes this easier to understand, so without further ado, consider the following setup.

It is PV provisioning implemented on top of NFS. Roughly, it goes like this: we may create storage for a StatefulSet (a stateful application) and manage it with an NFS StorageClass. Because NFS does not natively support automatic PV creation, we use a community-implemented plug-in (an external provisioner) to do the automatic PV creation for the StorageClass. Once the PV has been created, the Pod references it.

I. StorageClass (storage class)

Function: it can dynamically and automatically create the required PV

Provisioner (supplier/provider): the storage system that provides the storage resources. K8s has multiple provisioners built in, whose names are prefixed with "kubernetes.io", and custom provisioners can also be used.

Parameters: the storage class uses parameters to describe the storage volumes it will create. Note that the accepted parameters vary from provisioner to provisioner.

ReclaimPolicy: the reclaim policy of the PVs that are created. Available values are Delete (the default) and Retain.
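Putting these fields together, a StorageClass for one of the built-in provisioners might look like this sketch (AWS EBS is used here only to illustrate a "kubernetes.io"-prefixed provisioner and its parameters; it is not part of the NFS experiment below):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs   # a built-in provisioner
parameters:
  type: gp2                          # provisioner-specific parameter
reclaimPolicy: Retain                # if omitted, the default is Delete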

(1) The SC is based on the NFS service; confirm that NFS is up:

[root@master yaml] # showmount -e

(2) RBAC permission is required.

RBAC (role-based access control) is K8s's API security mechanism for controlling users' access rights: it specifies who can do what, and with what authority.

Here we grant the SC's provisioner the permissions it needs to operate on the K8s cluster.

[root@master yaml] # vim rbac-rolebind.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: bdqn-test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: bdqn-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: bdqn-test
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: bdqn-test
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Run it.

[root@master yaml] # kubectl apply -f rbac-rolebind.yaml

(3) nfs-deployment

Purpose: it is essentially an NFS client. It mounts the remote NFS server into a local directory through K8s's built-in NFS driver, and then registers itself with the storage class as a storage provisioner.

[root@master yaml] # vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: bdqn-test
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner            # specify the service account created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes      # mount directory inside the container
          env:
            - name: PROVISIONER_NAME             # built-in variable: the provisioner name
              value: bdqn-test                   # the value of that variable
            - name: NFS_SERVER                   # built-in variable: IP of the NFS server
              value: 192.168.1.21
            - name: NFS_PATH                     # built-in variable: directory shared by NFS
              value: /nfsdata
      volumes:                                   # the NFS server IP and path mounted into the container above
        - name: nfs-client-root
          nfs:
            server: 192.168.1.21
            path: /nfsdata

Execute it.

[root@master yaml] # kubectl apply -f nfs-deployment.yaml

(4) Create the storageclass

[root@master yaml] # vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
  namespace: bdqn-test
provisioner: bdqn-test          # corresponds to the PROVISIONER_NAME value in the env of nfs-client-provisioner above
reclaimPolicy: Retain           # reclaim policy is Retain (if omitted, the default is Delete)

Execute it.

[root@master yaml] # kubectl apply -f test-storageclass.yaml

(5) Create the PVC

[root@master yaml] # vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: bdqn-test
spec:
  storageClassName: stateful-nfs   # the name of the storage class; must match the SC created above
  accessModes:
    - ReadWriteMany                # access mode is RWM
  resources:
    requests:
      storage: 500Mi

Execute it.

[root@master yaml] # kubectl apply -f test-pvc.yaml

Check it out

[root@master yaml] # kubectl get pvc
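If the provisioner is working, the claim should show as Bound. The output will look roughly like the following (the volume name embeds a generated UID, so yours will differ; the age column is illustrative):

NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-79ddfcf1-65ae-455f-9e03-5bcfe6c6ce15   500Mi      RWX            stateful-nfs   10s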

(6) Create a Pod

[root@master yaml] # vim test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: bdqn-test
spec:
  containers:
    - name: test-pod
      image: busybox
      args:
        - /bin/sh
        - -c
        - sleep 30000
      volumeMounts:
        - name: nfs-pvc
          mountPath: /test
  restartPolicy: OnFailure
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim    # the name should be the same as the name of the PVC

Execute it.

[root@master yaml] # kubectl apply -f test-pod.yaml

Check it out

[root@master yaml] # kubectl get pod -n bdqn-test

(7) add content to the container and view the mount directory

Enter the container and write some test content

[root@master yaml] # kubectl exec -it test-pod -n bdqn-test /bin/sh
/ # cd test/
/test # touch test-file
/test # echo 123456 > test-file
/test # cat test-file
123456

View the mount directory

[root@master yaml] # ls /nfsdata/
bdqn-test-test-claim-pvc-79ddfcf1-65ae-455f-9e03-5bcfe6c6ce15  web1  web2
[root@master yaml] # cat /nfsdata/bdqn-test-test-claim-pvc-79ddfcf1-65ae-455f-9e03-5bcfe6c6ce15/test-file
123456

If there are many similar PVs in the K8s cluster, a PVC applying for space considers not only the storage class name and the access mode, but also the requested size, and is bound to the PV whose capacity fits best.
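As a hedged illustration of those matching criteria (the names and labels here are made up and not part of the experiments in this article), a claim can also narrow the choice with a label selector in addition to size and access mode:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hdd-5g
  labels:
    disktype: hdd                # label used for matching
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  nfs:
    path: /nfsdata/hdd
    server: 192.168.1.21
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hdd
spec:
  storageClassName: nfs          # class name must match
  accessModes:
    - ReadWriteOnce              # access mode must match
  resources:
    requests:
      storage: 3Gi               # the bound PV must be at least this large; the 5Gi PV above fits
  selector:
    matchLabels:
      disktype: hdd              # only PVs carrying this label are considered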

Exercise: run a web service using a Deployment resource, based on the nginx image, with replicas set to 3. The data persistence directory is nginx's main document root: /usr/share/nginx/html

Create a PVC to associate with the above resources.

1. PV and PVC based on the NFS service

Install the packages required for NFS

[root@node02 ~] # yum -y install nfs-utils rpcbind

Create a shared directory

[root@master ~] # mkdir / nfsdata

Configure the shared directory's export permissions

[root@master ~] # vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)

Start nfs and rpcbind

[root@master ~] # systemctl start nfs-server.service
[root@master ~] # systemctl start rpcbind

Test it

[root@master ~] # showmount -e
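If the export is set up correctly, the output should look roughly like this (the hostname comes from the environment table above):

Export list for master:
/nfsdata *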

2. First create two PVs, web-pv (1Gi) and web-pv2 (2Gi)

Web1

[root@master yaml] # vim web.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web1
    server: 192.168.1.21

Web2

[root@master yaml] # vim web2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web2
    server: 192.168.1.21

3. Create the required folders

[root@master yaml] # mkdir /nfsdata/web1
[root@master yaml] # mkdir /nfsdata/web2

4. Apply web and web2

[root@master yaml] # kubectl apply -f web.yaml
[root@master yaml] # kubectl apply -f web2.yaml

5. Check

[root@master yaml] # kubectl get pv

6. Create the yaml file for web's pvc

[root@master yaml] # vim web-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

Execute it.

[root@master yaml] # kubectl apply -f web-pvc.yaml

Check it out

[root@master yaml] # kubectl get pvc

The system automatically binds the PVC to the PV whose capacity is closest to the request, so the 1Gi PV is chosen.
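The binding is visible in the PV listing; roughly (ages and exact columns will vary):

[root@master yaml] # kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   AGE
web-pv    1Gi        RWO            Recycle          Bound       default/web-pvc   nfs            1m
web-pv2   2Gi        RWO            Recycle          Available                     nfs            1m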

7. Create the yaml file for the pod

[root@master yaml] # vim web-pod.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-pod
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: web-test
              mountPath: /usr/share/nginx/html
      volumes:
        - name: web-test
          persistentVolumeClaim:
            claimName: web-pvc

Execute it.

[root@master yaml] # kubectl apply -f web-pod.yaml

Check it out.

[root@master yaml] # kubectl get pod

8. Visit nginx's web page. First look up the Pod's IP:

[root@master yaml] # kubectl get pod -o wide

Enter the container and set the web page content:

[root@master yaml] # kubectl exec -it web-pod-8686d9c594-qxhr9 /bin/bash
root@web-pod-8686d9c594-qxhr9:/# cd /usr/share/nginx/html/
root@web-pod-8686d9c594-qxhr9:/usr/share/nginx/html# ls
root@web-pod-8686d9c594-qxhr9:/usr/share/nginx/html# echo 123456 > index.html
root@web-pod-8686d9c594-qxhr9:/usr/share/nginx/html# exit

Visit it:

[root@master yaml] # curl 10.244.2.17

If two PVs have the same size and the same name but different access modes, which one will the PVC be associated with? (the access mode must match when verifying the PV and PVC association)

Create two PVs with the same size, the same name, and different access modes.

web1

[root@master yaml] # vim web1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce                # can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web1
    server: 192.168.1.21

web2

[root@master yaml] # vim web2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany                # can be mounted read-write by multiple nodes
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web1
    server: 192.168.1.21

Create the required directory:

[root@master yaml] # mkdir /nfsdata/web1

Execute:

[root@master yaml] # kubectl apply -f web1.yaml
[root@master yaml] # kubectl apply -f web2.yaml

Create the pvc:

[root@master yaml] # vim web-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
spec:
  accessModes:
    - ReadWriteMany                # can be mounted read-write by multiple nodes
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

Execute:

[root@master yaml] # kubectl apply -f web-pvc.yaml

Check:

[root@master yaml] # kubectl get pv

[root@master yaml] # kubectl get pvc

Now you can see that the PV and PVC are successfully associated, but why is there only one PV? (note that both PV definitions mount the same directory)

That is because when you create two PVs with the same name, they are not treated as two different PVs but as the same PV, and the one created later overwrites the one created earlier. Then, when the PVC is created, if its access mode matches the access mode of the later PV, they are bound successfully; otherwise they are not. (Of course, the PV's capacity also has to satisfy the request under these conditions.)

(1) Create a namespace with your own name. All of the following resources live under this namespace.

Write the yaml file for the namespace:

[root@master yaml] # vim namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-znb

Execute:

[root@master yaml] # kubectl apply -f namespace.yaml

Check:

[root@master yaml] # kubectl get ns

(2) Set RBAC permissions.

Download the required image

docker pull registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner

Write the rbac yaml file:

[root@master yaml] # vim rbac-rolebind.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: xgp-znb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xgp-znb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: xgp-znb
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: xgp-znb
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute:

[root@master yaml] # kubectl apply -f rbac-rolebind.yaml

(3) Create nfs-deployment.yaml

Write the yaml file for the deployment:

[root@master yaml] # vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xgp-znb
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: xgp-znb
            - name: NFS_SERVER
              value: 192.168.1.21
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.21
            path: /nfsdata

Execute:

[root@master yaml] # kubectl apply -f nfs-deployment.yaml

(4) Create a storageclass to create PVs automatically.

Write the yaml file for the storageclass:

[root@master yaml] # vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
provisioner: xgp-znb          # associate with the Deployment above through the provisioner field
reclaimPolicy: Retain

[root@master yaml] # kubectl apply -f storageclass.yaml

(5) Create the PVC

Write the yaml file for the PVC:

[root@master yaml] # vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: xgp-znb
spec:
  storageClassName: test-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi

Execute:

[root@master yaml] # kubectl apply -f pvc.yaml

Check:

[root@master yaml] # kubectl get pvc -n xgp-znb

(6) Create the Pods: run a web service based on nginx, using the Deployment resource object with replicas=3. The persistent storage directory is nginx's default home directory.

Write the yaml file for the deployment:

[root@master yaml] # vim pod.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-pod
  namespace: xgp-znb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: web-test
              mountPath: /usr/share/nginx/html
      volumes:
        - name: web-test
          persistentVolumeClaim:
            claimName: test-claim

Execute:

[root@master yaml] # kubectl apply -f pod.yaml

Check:

[root@master yaml] # kubectl get pod -n xgp-znb

(7) Modify the nginx home page and visit the nginx page.

[root@master yaml] # kubectl exec -it web-pod-8cd956cc7-6szjb -n xgp-znb /bin/bash    // enter the container
root@web-pod-8cd956cc7-6szjb:/# echo xgp-znb > /usr/share/nginx/html/index.html       // add custom content

Visit it from the host:

[root@master yaml] # curl 10.244.2.18

Five portability suggestions:

1. Ship your PVCs alongside your other configuration (Deployments, ConfigMaps, and so on).
2. Do not ship PVs in that configuration, because the users who instantiate it may not have permission to create PVs.
3. Give the user the option of providing a storage class name when initializing the PVC template.
4. In your tooling, watch for PVCs that do not get bound and surface them to the user.
5. Enable DefaultStorageClass when the cluster starts, but do not pin everything to one specific class, because the classes of different provisioners make it hard to agree on volume parameters.

1. To bind a PV to a PVC, the selection can be made on the following combination of conditions:

- Access Modes: select PVs by access mode.
- Resources: select by the resource request, for example requesting a PV with a storage size of 8 GiB.
- Selector: select PVs by their labels.
- Class: select by the StorageClass name (storageClassName), binding to a specific type of backend storage.

2. Notes on filtering PVs by class:

Any PVC can use dynamic storage directly without a StorageClass annotation; simply mark one StorageClass object as the default. A StorageClass becomes the default storage by carrying the annotation storageclass.beta.kubernetes.io/is-default-class. With a default StorageClass in place, users do not need a storage-class annotation to create a PVC, and the DefaultStorageClass admission controller introduced in 1.4 automatically points such PVCs at the default storage class.

When a PVC specifies a concrete storageClassName, such as fast, it binds to a PV whose class is also fast. When storageClassName is specified as "" in the PVC, it binds only to PVs with no class (PVs with no class annotation, or whose value is ""). When the PVC does not specify storageClassName at all, the resolution of the default class depends on whether the DefaultStorageClass admission plugin is enabled (this can be set when the apiserver starts):

- When the DefaultStorageClass admission plugin is enabled, the default class is assigned to PVCs that carry no storageClass annotation. The default class must be designated by the administrator, for example by adding the annotation storageclass.beta.kubernetes.io/is-default-class: "true" when creating the StorageClass object. If there is more than one default class, such PVCs are denied creation; if no default class has been designated, the DefaultStorageClass admission plugin has no effect, and the PVC binds to PVs with no class.
- When the DefaultStorageClass admission plugin is not enabled, a PVC without a storageClass annotation is bound to a PV with no class (a PV with no class annotation, or whose value is "").
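As a hedged sketch of the default-class mechanism described above (the class name and provisioner are illustrative; newer releases use the storageclass.kubernetes.io/is-default-class annotation instead of the beta name):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"   # mark this class as the cluster default
provisioner: kubernetes.io/aws-ebs                              # illustrative built-in provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-claim              # names no storageClassName at all
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                 # with the DefaultStorageClass admission plugin enabled,
                                   # this claim is automatically assigned the "standard" class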
