This article explores Kubernetes static persistent volumes: the concepts behind Volumes, PersistentVolumes, and PersistentVolumeClaims, and how to create them and mount them into a Pod.
1 Concepts
Volume
A Volume is the mount interface of a Pod. Its lifecycle is tied to the Pod, and it can be shared among the containers in the Pod. It is mainly used to store temporary data during the Pod's lifetime, but it can also be backed by the host machine or other storage media to achieve persistent storage. Different storage requirements can be met by choosing an appropriate volume type. The types supported by Volume include:
emptyDir
hostPath
gcePersistentDisk
awsElasticBlockStore
nfs
iscsi
flocker
glusterfs
rbd
cephfs
gitRepo
secret
persistentVolumeClaim
downwardAPI
azureFile
azureDisk
vsphereVolume
quobyte
They are not all introduced here; for details, refer to the official Volume documentation.
PersistentVolume (PV)
A PersistentVolume represents an independent storage backend; the underlying implementation can be NFS, GlusterFS, Cinder, hostPath, and so on. A PV carves out a piece of that storage as a resource for the Kubernetes cluster. Its lifecycle is independent of any Pod: it is a standalone virtual storage space, but it cannot be mounted directly by a Pod's Volume. That is where the PVC comes in.
Back-end storage plug-ins supported by PV:
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre Channel)
NFS
ISCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
HostPath (single-node testing only; not supported as durable local storage and will not work in a multi-node cluster)
PersistentVolumeClaim (PVC)
A Pod consumes PV resources through a PVC, which can be understood as a request to use storage. The PVC specifies the size and access mode of the storage needed. When the PVC is submitted to Kubernetes, the PersistentVolume controller finds a suitable PV and binds it to the PVC; the Volume in the Pod can then use the PV's resources through the PVC.
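The binding step described above can be sketched in a few lines. This is an illustrative simplification only, not the actual controller logic: the real binder also considers storage class, label selectors, and volume mode, and the dictionaries and the `find_matching_pv` helper below are hypothetical names invented for this sketch.

```python
# Illustrative sketch of how a controller could match a PVC to a PV.
# Simplified: real binding also checks storage class, selectors, etc.

def parse_gi(quantity):
    """Convert a 'Gi' quantity string like '5Gi' to an integer of GiB (simplified)."""
    return int(quantity.rstrip("Gi"))

def find_matching_pv(pvc, pvs):
    """Return the first available PV that satisfies the claim, else None."""
    for pv in pvs:
        if pv["status"] != "Available":
            continue
        # The PV must offer at least the requested capacity...
        if parse_gi(pv["capacity"]) < parse_gi(pvc["request"]):
            continue
        # ...and support every access mode the claim asks for.
        if not set(pvc["accessModes"]).issubset(pv["accessModes"]):
            continue
        return pv
    return None

pvs = [
    {"name": "pv001", "capacity": "5Gi", "accessModes": ["ReadWriteOnce"], "status": "Available"},
    {"name": "pv002", "capacity": "1Gi", "accessModes": ["ReadWriteOnce"], "status": "Available"},
]
pvc = {"name": "myclaim-1", "request": "5Gi", "accessModes": ["ReadWriteOnce"]}

match = find_matching_pv(pvc, pvs)
```

With these inputs, `myclaim-1` binds to `pv001`, since `pv002` is too small.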
StorageClass
Used for dynamic PV provisioning. Unlike static PVs, dynamic PVs do not need to be created in advance: based on the PVC's resource request, the PersistentVolume controller finds the underlying storage defined by the StorageClass and provisions resources dynamically.
2 Create a PersistentVolume
Define a PersistentVolume; here hostPath is used as the underlying storage:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv001
  labels:
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /tmp/data
You can also use NFS or other plug-ins as the underlying storage layer, and you need to prepare NFS Server in advance:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp/data
    server: 172.17.0.2
Capacity
Defines the storage capacity of the PV. Currently only size can be specified; other attributes such as IOPS and throughput may be supported in the future.
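Capacity values such as 5Gi use binary suffixes. A minimal sketch of how such a quantity maps to bytes follows; this is a simplification (the real API also accepts decimal suffixes like k, M, G and plain byte counts), and the function name is my own:

```python
# Minimal parser for Kubernetes binary capacity quantities (Ki, Mi, Gi, Ti).
# Simplified sketch: decimal suffixes (k, M, G) are not handled here.

BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(quantity: str) -> int:
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * factor
    return int(quantity)  # no suffix: a plain byte count

print(quantity_to_bytes("5Gi"))  # 5368709120
```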
AccessModes
Defines how the resource can be accessed, subject to what the underlying storage supports. The access modes are:
ReadWriteOnce (RWO) - can be mounted read-write by a single node
ReadOnlyMany (ROX) - can be mounted read-only by many nodes
ReadWriteMany (RWX) - can be mounted read-write by many nodes
The access modes supported by each storage plugin are listed in the official Kubernetes documentation.
PersistentVolumeReclaimPolicy
Defines how the resource is reclaimed, again subject to what the underlying storage supports. The existing reclaim policies are:
Retain - manual reclamation
Recycle - delete the data ("rm -rf /thevolume/*")
Delete - delete the volume through the storage backend, for backends such as AWS EBS, GCE PD, or Cinder
Currently only NFS and HostPath support the Recycle policy, while AWS EBS, GCE PD, Azure Disk, and Cinder support the Delete policy.
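The effect of the Recycle policy is essentially "remove everything in the volume, including hidden files, but keep the directory itself". A rough Python equivalent of that scrub step, for illustration only (the real recycler runs a shell command inside a pod; the `scrub` helper here is a hypothetical name):

```python
# Illustrative equivalent of the Recycle policy's scrub step:
# delete every entry in a directory, dotfiles included, keeping the directory.
import os
import shutil
import tempfile

def scrub(path):
    """Remove all entries under `path` (including hidden files)."""
    for entry in os.listdir(path):  # os.listdir includes dotfiles
        full = os.path.join(path, entry)
        if os.path.isdir(full) and not os.path.islink(full):
            shutil.rmtree(full)
        else:
            os.remove(full)

# Demo on a throwaway directory standing in for the volume.
vol = tempfile.mkdtemp()
open(os.path.join(vol, "data.txt"), "w").close()
open(os.path.join(vol, ".hidden"), "w").close()
scrub(vol)
print(os.listdir(vol))  # []
```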
Note: the Recycle policy executes the data-deletion command by running a busybox container. The default busybox image is gcr.io/google_containers/busybox:latest with imagePullPolicy: Always. To adjust this, add a kube-controller-manager startup parameter:

--pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/manifests/recycler.yml

apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: [Path of Persistent Volume hosted]
  containers:
  - name: pv-recycler
    image: "gcr.io/google_containers/busybox"
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub

3 Create a PersistentVolumeClaim
Define a PersistentVolumeClaim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: stable
AccessModes
Same as the access modes of a PersistentVolume. The PersistentVolume controller binds a PVC only to a PV whose access modes match.
Resources
Defines the size of the storage resource requested, following the Kubernetes resource model. For more information, see the Resource Model docs.
Selector
Defines a filter over the set of PVs the PVC may bind to, used together with PV labels. It is consistent with other selectors in Kubernetes, with two matching options:
matchLabels - the volume's labels must match the given key-value pairs
matchExpressions - a list of requirements, each made of a key, an operator, and values; supported operators are In, NotIn, Exists, and DoesNotExist
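The four operators can be sketched in a few lines of Python. This is an illustration of the selector semantics, not Kubernetes code, and the `matches` helper is a hypothetical name:

```python
# Illustrative evaluation of one matchExpressions requirement
# against a PV's labels. Simplified sketch of the selector semantics.

def matches(labels, key, operator, values=None):
    if operator == "In":
        return labels.get(key) in values
    if operator == "NotIn":
        return labels.get(key) not in values
    if operator == "Exists":
        return key in labels
    if operator == "DoesNotExist":
        return key not in labels
    raise ValueError(f"unknown operator: {operator}")

pv_labels = {"release": "stable"}
print(matches(pv_labels, "release", "In", ["stable", "beta"]))  # True
```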
In addition, there is the volume.beta.kubernetes.io/storage-class annotation: a PV and a PVC will bind only if they carry the same value. For more information, see the PersistentVolume docs.
4 Mount the Volume to a Pod
After PV and PVC are created and bound, they look something like this:
NAME        CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM               REASON   AGE
pv/pv001    5Gi        RWO           Recycle         Bound    default/myclaim-1            11m

NAME            STATUS   VOLUME   CAPACITY   ACCESSMODES   AGE
pvc/myclaim-1   Bound    pv001    5Gi        RWO           3s
PersistentVolume has four states:
Available - free, not yet bound to a claim
Bound - bound to a PVC
Released - the PVC has been deleted, but the resource has not yet been reclaimed
Failed - automatic reclamation failed
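These four states form a simple lifecycle. A sketch of the transitions, purely as an illustration (this table is not a Kubernetes API; `PV_TRANSITIONS` and `can_transition` are names invented here):

```python
# Illustrative lifecycle of a PV's status phase (not a real API).
PV_TRANSITIONS = {
    "Available": ["Bound"],               # claimed by a matching PVC
    "Bound": ["Released"],                # the PVC was deleted
    "Released": ["Available", "Failed"],  # reclaim succeeded or failed
    "Failed": [],                         # manual intervention required
}

def can_transition(src, dst):
    return dst in PV_TRANSITIONS.get(src, [])
```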
Mount the created PVC myclaim-1 into a Pod:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
After the mount succeeds, /tmp/data is created automatically on the host where the Pod runs, and data is stored there. A hostPath volume is convenient for testing and debugging, but it is only suitable for a single-node environment: in a multi-node cluster, if the Pod is rescheduled to another node or rebuilt, the original data can no longer be accessed.