Kubernetes Storage Volume Management PV&PVC (10)

2025-04-06 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/02 Report--

To persist a container's data, you can use a Kubernetes Volume.

The life cycle of a Volume is independent of the container. Containers in a Pod may be destroyed and rebuilt, but the Volume is retained.

In essence, a Kubernetes Volume is a directory, similar to a Docker Volume. Once a Volume is mounted into the containers of a Pod, those containers can access it. Kubernetes Volume supports a variety of backend types, including emptyDir, hostPath, GCE Persistent Disk, AWS Elastic Block Store, NFS, Ceph, etc. For the complete list, see https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes
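Whatever the backend type, using a Volume follows the same two-part pattern: spec.volumes declares the Volume and its backend, and each container that wants it lists it under volumeMounts. A minimal sketch (the names here are illustrative placeholders, not from the examples in this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo          # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: my-volume        # must match a volume declared below
      mountPath: /data       # where the volume appears inside the container
  volumes:
  - name: my-volume
    emptyDir: {}             # backend type; could be hostPath, nfs, etc.
```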

1. EmptyDir

emptyDir is the most basic Volume type. As its name suggests, an emptyDir Volume is an empty directory on the Host.

An emptyDir Volume persists across container restarts, but not across the Pod's lifetime: when the Pod is deleted from the node, the contents of the Volume are deleted with it. If only a container is destroyed while the Pod remains, the Volume is not affected.

In other words: the life cycle of emptyDir Volume is the same as that of Pod.

All containers in the Pod can share the Volume, and each can specify its own mount path. Here is an example to practice emptyDir. The configuration file is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - name: producer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /producer_dir
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /producer_dir/hello; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /consumer_dir
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}

Here we simulate a producer-consumer scenario. The Pod has two containers, producer and consumer, which share a Volume: producer writes data to the Volume, and consumer reads data from it.

The volumes section at the bottom of the file defines a Volume named shared-volume of type emptyDir. The producer container mounts shared-volume at /producer_dir and writes data into the file hello via echo. The consumer container mounts shared-volume at /consumer_dir and reads the data from hello via cat.

Create the Pod by executing the following command:

[root@master ~]# kubectl apply -f emptydir.yaml
pod/producer-consumer created
[root@master ~]# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
producer-consumer   2/2     Running   0          8s
[root@master ~]# kubectl logs producer-consumer consumer
hello world

kubectl logs shows that the consumer container successfully reads the data written by producer, verifying that the two containers share the emptyDir Volume.

emptyDir is a temporary directory created on the Host. Its advantage is that it conveniently provides shared storage for the containers in a Pod without any extra configuration. But it is not persistent: once the Pod is gone, the emptyDir is gone too. Given this behavior, emptyDir is particularly suited to scenarios where the containers in a Pod need temporary shared storage, such as the producer-consumer case above.
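As an aside not covered by the example above, emptyDir also accepts two optional fields: medium, which can be set to Memory to back the directory with tmpfs (RAM) instead of node disk, and sizeLimit, which caps its usage. A sketch:

```yaml
# Sketch of a memory-backed emptyDir with a size cap.
# Both fields are optional; the defaults are node disk and no limit.
volumes:
- name: cache
  emptyDir:
    medium: Memory      # back the volume with tmpfs (RAM)
    sizeLimit: 256Mi    # Pod is evicted if usage exceeds this
```

Note that a tmpfs-backed emptyDir counts against the container's memory limit, so it suits small scratch data rather than large caches.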

2. HostPath

The role of a hostPath Volume is to mount a directory that already exists in the Docker Host's file system into a Pod's container. Most applications will not use hostPath Volume, because it couples the Pod to a particular node and thereby limits how the Pod can be scheduled. However, applications that need access to Kubernetes or Docker internals (configuration files and binaries) do need hostPath.

In the following example, we mount the host directory /data/pod/v1 to /usr/share/nginx/html/ in the Pod's container.

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
spec:
  containers:
  - name: mytest
    image: wangzan18/mytest:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/v1
      type: DirectoryOrCreate

If the Pod is destroyed, the directory behind the hostPath is retained, so in this respect hostPath is more persistent than emptyDir. But once the Host crashes, the hostPath becomes inaccessible as well.
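The type field in the hostPath example controls how the path is checked or created before mounting. A short sketch of the commonly used values (a summary, not an exhaustive list):

```yaml
# hostPath "type" values and what they mean:
hostPath:
  path: /data/pod/v1
  type: DirectoryOrCreate   # create the directory (mode 0755) if absent
# Other options include:
#   Directory    - the directory must already exist, or the mount fails
#   FileOrCreate - create an empty file at the path if absent
#   File         - the file must already exist
#   Socket       - a UNIX socket must already exist at the path
#   "" (empty)   - no check is performed before mounting
```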

3. PV & PVC

Volume provides a very good data persistence solution, but there are still deficiencies in manageability.

Taking AWS EBS as an example, to use such a Volume, the Pod must know the following information in advance:

The current Volume comes from AWS EBS. The EBS Volume has been created ahead of time, and its exact volume-id is known.

A Pod is usually maintained by the application developer, while the Volume is usually maintained by the storage system administrator. To obtain the information above, the developer must either ask the administrator or be the administrator himself.

This creates a management problem: the responsibilities of application developers and system administrators are coupled. That is acceptable when the system is small or in a development environment. But as the cluster grows, and especially in a production environment, efficiency and security make this a problem that must be solved.

The solutions given by Kubernetes are PersistentVolume and PersistentVolumeClaim.

PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by an administrator. Like a Volume, a PV is persistent, and its life cycle is independent of any Pod.

PersistentVolumeClaim (PVC) is an application (Claim) for a PV. A PVC is usually created and maintained by an ordinary user. When storage needs to be allocated for a Pod, the user creates a PVC declaring the required capacity and access mode (such as read-only); Kubernetes then finds and provides a PV that meets the criteria.

With a PersistentVolumeClaim, users only need to tell Kubernetes what kind of storage they need, regardless of underlying details such as where the space is actually allocated and how it is accessed. Those Storage Provider internals are left to the administrator; only the administrator needs to care about the details of creating the PersistentVolume.

Kubernetes supports many types of PersistentVolume, such as AWS EBS, Ceph, NFS, etc. For a complete list, please see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes

In the next section, we will use NFS to understand the use of PersistentVolume.

1、NFS PersistentVolume

As preparation, we have set up an NFS server on the node3 node and exported three directories:

[root@datanode03 ~]# showmount -e localhost
Export list for localhost:
/data/volumes/pv003 192.168.1.0/24
/data/volumes/pv002 192.168.1.0/24
/data/volumes/pv001 192.168.1.0/24

Next, create the PVs. The configuration file nfs-pv.yaml is as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/volumes/pv001
    server: 192.168.1.203
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/volumes/pv002
    server: 192.168.1.203
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/volumes/pv003
    server: 192.168.1.203

accessModes specifies the access mode, here ReadWriteMany. The supported access modes are:

ReadWriteOnce - the PV can be mounted by a single node in read-write mode.
ReadOnlyMany - the PV can be mounted by multiple nodes in read-only mode.
ReadWriteMany - the PV can be mounted by multiple nodes in read-write mode.

persistentVolumeReclaimPolicy specifies the reclaim policy of the PV, here Recycle. The supported policies are:

Retain - the administrator must reclaim the volume manually.
Recycle - clears the data in the PV, equivalent to running rm -rf /thevolume/*.
Delete - deletes the corresponding storage resource on the Storage Provider, such as AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, and so on.

Create the PVs we just defined.

[root@master ~]# kubectl apply -f nfs-pv.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWX            Retain           Available                                    6s
pv002   5Gi        RWX            Retain           Available                                    6s
pv003   10Gi       RWX            Recycle          Available                                    6s

2、Create PVC

Create the configuration file nfs-pvc.yaml as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create a PVC.

[root@master ~]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/pvc001 created
[root@master ~]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001   Bound    pv001    1Gi        RWX                           9s
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM            STORAGECLASS   REASON   AGE
pv001   1Gi        RWX            Retain           Bound       default/pvc001                           2m7s
pv002   5Gi        RWX            Retain           Available                                            2m7s
pv003   10Gi       RWX            Recycle          Available                                            2m7s

3、Create a Pod to mount
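Notice that pvc001 bound to pv001: Kubernetes selects a PV whose capacity and access modes satisfy the claim, preferring the smallest volume that is sufficient. As a hypothetical illustration (not part of the original walkthrough), a claim that no PV can satisfy would simply stay in Pending status:

```yaml
# Hypothetical claim: 20Gi exceeds every PV above (1Gi, 5Gi, 10Gi),
# so this PVC would remain Pending until a matching PV appears.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-too-big      # illustrative name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```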

Create a pod configuration file, nfs-pvc-pod.yaml, as follows:

apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
  - name: mypod1
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - name: mydata
      mountPath: "/mydata"
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: pvc001

Create a pod.

[root@master ~]# kubectl apply -f nfs-pvc-pod.yaml
pod/mypod1 created
[root@master ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
mypod1   1/1     Running   0          17m   10.244.1.16   node01   <none>           <none>

Verify that PV is available:

[root@master ~]# kubectl exec mypod1 touch /mydata/hello

Then check it on the node3 server.

[root@node03 pv001]# ll
-rw-r--r-- 1 root root 0 Dec 13 19:39 hello

As you can see, the file /mydata/hello created in the Pod has indeed been saved to the NFS server directory /data/volumes/pv001.
