
How to use Kubernetes persistent volumes

2025-02-24 Update From: SLTechnology News&Howtos


This article explains how to use Kubernetes persistent volumes. The methods introduced are simple and practical.

Volumes

Files in a container are stored on disk only for the container's lifetime, which poses problems for important applications running in containers:

1. When a container crashes, the kubelet restarts it, but the container restarts in a clean state, so files are lost.

2. When multiple containers run in a Pod, they often need to share files.

So Kubernetes uses the abstract concept of volume (Volume) to solve these two problems. Kubernetes supports the following types of volumes:

hostPath: mounts a file or directory from the host node's file system into your Pod.
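A minimal sketch of a Pod mounting a host directory directly via hostPath (the Pod name, image, and path here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html   # where the host directory appears in the container
      name: host-vol
  volumes:
  - name: host-vol
    hostPath:
      path: /mnt/data                    # directory on the host node
      type: DirectoryOrCreate            # create the directory if it does not exist
```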

emptyDir: created when a Pod is assigned to a node; the volume exists as long as the Pod runs on that node. As its name suggests, the volume is initially empty. Multiple containers in the Pod can share the files in an emptyDir volume. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently. A container crash does not remove the Pod from the node, so data in an emptyDir volume survives container crashes.
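As a sketch of the file-sharing use case above (the Pod name, container names, and image are illustrative), two containers can mount the same emptyDir volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /cache/msg && sleep 3600"]
    volumeMounts:
    - mountPath: /cache      # both containers see the same directory
      name: cache-vol
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-vol
  volumes:
  - name: cache-vol
    emptyDir: {}             # deleted permanently when the Pod leaves the node
```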

persistentVolumeClaim: used to mount a persistent volume (PersistentVolume) into a Pod. A persistent volume claim (PersistentVolumeClaim) is a way for users to "claim" durable storage (such as NFS or iSCSI) without knowing the details of the particular storage environment.

Persistent Volumes

This article mainly covers the use of persistent volumes. So that developers can request storage resources without dealing with storage infrastructure details, Kubernetes introduces persistent volumes (PersistentVolume, PV) and persistent volume claims (PersistentVolumeClaim, PVC):

A persistent volume (PersistentVolume, PV) is a piece of storage in the cluster that can be provisioned in advance by an administrator or dynamically provisioned using a storage class (StorageClass). Persistent volumes are cluster resources, just as nodes are. Like ordinary volumes, PVs are implemented with volume plug-ins, except that they have a lifecycle independent of any Pod that uses them. This API object captures the implementation details of the storage, whether it is backed by NFS, iSCSI, or a cloud-specific storage system.

A persistent volume claim (PersistentVolumeClaim, PVC) represents a user's request for storage. PVCs are conceptually similar to Pods: Pods consume node resources, while PVCs consume PV resources. Pods can request specific amounts of resources (CPU and memory); similarly, PVCs can request a specific size and access mode (for example, a PV can be required to mount in one of the ReadWriteOnce, ReadOnlyMany, or ReadWriteMany modes).

There are two ways to create a PV:

Static provisioning: the cluster administrator manually creates, in advance, the PVs an application requires.

Dynamic provisioning:

Method 1: the user manually creates a PVC, and the Provisioner component dynamically creates the corresponding PV.

Method 2: use a volumeClaimTemplates declaration when creating the Pod (for example, in a StatefulSet).

PV reclaim policies

Retain (Retain): this policy allows resources to be reclaimed manually. When the PVC is deleted, the PV still exists and becomes Released; the user must manually reclaim the volume through the following steps:

1. Delete the PV.

2. Manually clean up the data on the associated storage asset.

Recycle (Recycle): this policy is deprecated; dynamic provisioning is recommended instead. It performs a basic scrub (rm -rf /thevolume/*) on the volume, after which the volume can be claimed again. Only NFS and hostPath support the Recycle policy.

Delete (Delete)

When the delete operation occurs, the PV object is removed from the Kubernetes cluster and the associated external storage resource is deleted as well (some provisioners rename rather than delete, depending on the delete logic they define).

Dynamically provisioned volumes inherit the reclaim policy of their StorageClass, which defaults to Delete: when the user deletes the PVC, the PV's delete policy is executed automatically.
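The reclaim policy is set on the PV spec. A minimal sketch (the PV name and the NFS server address are assumptions for illustration, not from the article):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # PV becomes Released after PVC deletion,
                                          # awaiting manual cleanup
  nfs:
    server: 10.0.1.31
    path: /storage
```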

Access mode

The access modes are:

ReadWriteOnce - the volume can be mounted read/write by a single node.

ReadOnlyMany - the volume can be mounted read-only by many nodes.

ReadWriteMany - the volume can be mounted read/write by many nodes.

In the command line interface (CLI), the access mode also uses the following abbreviations:

RWO-ReadWriteOnce

ROX-ReadOnlyMany

RWX-ReadWriteMany

A volume can only be mounted with one access mode at a time, even if it supports several. For example, a GCEPersistentDisk volume can be mounted in ReadWriteOnce mode by a single node or in ReadOnlyMany mode by many nodes, but not both at the same time.

Volume binding mode

The volumeBindingMode field controls when PVC and PV are bound.

Immediate: volume binding and dynamic provisioning occur as soon as the PVC is created. For storage backends that are topology-constrained and not reachable from all nodes in the cluster, the PV is bound or provisioned without knowledge of the Pod's scheduling requirements.

WaitForFirstConsumer: this mode delays the binding and provisioning of a PV until a Pod using the PVC is created. PVs are selected or provisioned according to the topology implied by the Pod's scheduling constraints, including but not limited to resource requirements, node selectors, pod affinity and anti-affinity, and taints and tolerations.
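A minimal sketch of a StorageClass that delays binding (the class name is illustrative; kubernetes.io/no-provisioner is the provisioner commonly used for local, statically created PVs):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wait-for-consumer-demo
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # bind only once a consuming Pod is scheduled
```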

Static provisioning

Static provisioning requires the administrator to manually create the PV, then create a PVC that binds to the PV, and finally create a Pod that declares the PVC.

Create the PV:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual   # for static provisioning, any name can be used
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"        # create this directory on the node where the Pod runs
```

Create the PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual   # must match the storageClassName in the PV
  accessModes:
  - ReadWriteOnce            # must match an accessMode in the PV
  resources:
    requests:
      storage: 3Gi           # requests 3Gi; binding is best-fit: if a 10Gi and a
                             # 20Gi PV both qualify, the 10Gi PV is used
```

Create a Pod that mounts the PVC:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim             # PVC name
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"   # mount path inside the container
      name: task-pv-storage
```

Create a file in the directory on the host:

```shell
root@worker01:~# cd /mnt/data
root@worker01:/mnt/data# echo "volume nginx" > index.html
```

When you access the Pod's service, you can see that nginx serves the modified index.html:

```shell
root@master01:~/yaml/volume# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
task-pv-pod   1/1     Running   0          11m   192.168.5.17   worker01   <none>           <none>
root@master01:~/yaml/volume# curl 192.168.5.17
volume nginx
```

Deletion should proceed in the order Pod --> PVC --> PV. If the PVC is deleted first, it is not actually removed until the Pod is deleted; if the PV is deleted first, it is not removed until both the Pod and the PVC are deleted.

Dynamic provisioning

Dynamic volume provisioning allows storage volumes to be created on demand. Without it, cluster administrators must manually contact their cloud or storage provider to create new storage volumes, then create PersistentVolume objects in the Kubernetes cluster to represent them. Dynamic provisioning eliminates this pre-configuration: storage is provisioned automatically when a user requests it.

Install NFS

Install the NFS server:

```shell
root@master01:/# apt-get -y install nfs-kernel-server
root@master01:/# systemctl enable nfs-kernel-server.service && systemctl restart nfs-kernel-server.service
```

Install the NFS client on the worker node:

```shell
root@worker01:~# apt-get -y install nfs-common
```

Configure the storage plug-in

Configure RBAC:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```

Deploy the storage plug-in

The implementation of dynamic volume provisioning is based on the StorageClass API object in the storage.k8s.io API group. Cluster administrators can define multiple StorageClass objects as needed, and each object specifies a storage plug-in (also known as a provisioner). The storage plug-in runs in the Kubernetes cluster as a Pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs   # identifies the plug-in; must match the StorageClass's provisioner
        - name: NFS_SERVER
          value: 10.0.1.31        # IP address of the NFS server
        - name: NFS_PATH
          value: /storage         # exported path on the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.0.1.31       # IP address of the NFS server
          path: /storage          # exported path on the NFS server
```

Method 1: create a PVC and have the PV provisioned automatically

Configure the StorageClass. The StorageClass declares the storage plug-in used to automatically create PVs; its provisioner field must match the plug-in's identifier so that volumes can be provisioned dynamically:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the PROVISIONER_NAME env value in the nfs
                              # deployment; NFS has no built-in provisioner, so the
                              # external plug-in's identifier is used
parameters:
  archiveOnDelete: "false"
```

Create the PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-storage
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
```

Looking at the PVC and PV that were created, we can see that only the PVC was created manually; the PV was configured automatically by the storage plug-in:

```shell
root@master01:~/yaml/storageClass# kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
nginx-pv-storage   Bound    pvc-e52ac960-182a-4065-a6e8-6957f5c93b8a   1Gi        RWX            managed-nfs-storage   3s
root@master01:~/yaml/storageClass# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-e52ac960-182a-4065-a6e8-6957f5c93b8a   1Gi        RWX            Delete           Bound    default/nginx-test   managed-nfs-storage            11s
```

A Pod then uses the PVC to request the volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  volumes:
  - name: nginx-pv-storage
    persistentVolumeClaim:
      claimName: nginx-test
  containers:
  - name: nginx-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "nginx-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: nginx-pv-storage
```

Method 2: volumeClaimTemplates

In addition to creating a PVC that automatically creates a PV, and then declaring the PVC in the Pod as above, an easier method is to use volumeClaimTemplates to specify the StorageClass and the requested storage size directly, dynamically creating both PVC and PV:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: nginx-disk-ssd
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: nginx-disk-ssd
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "managed-nfs-storage"   # StorageClass name
      resources:
        requests:
          storage: 10Gi
```

By now you should have a deeper understanding of how to use Kubernetes persistent volumes; the best way to consolidate it is to try the examples in practice.
