2025-03-31 Update From: SLTechnology News&Howtos
This article introduces Persistent Volumes in Kubernetes storage: what they are, how they are provisioned, bound, used, and reclaimed, and how they relate to PersistentVolumeClaims and StorageClasses.
Brief introduction
Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides users and administrators with an API that abstracts the details of how storage is provided and how it is consumed. Here we introduce two API resources: PersistentVolume (PV) and PersistentVolumeClaim (PVC).
A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. Like a node, a PV is a resource in the cluster. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses them. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (such as CPU and memory); claims can request a specific size and access mode (e.g., mounted once read-write, or many times read-only).
While PVCs let users consume abstract storage resources, users commonly need PVs with varying properties, such as performance. Cluster administrators need to offer a variety of PVs that differ in size and access mode, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
A StorageClass gives administrators a way to describe the classes of storage they offer. Cluster administrators can map different classes to different service levels and different backend policies.
The lifecycle of a volume and claim
PVs are resources in the cluster; PVCs are requests for those resources and also act as claim checks on them. The interaction between PVs and PVCs follows this lifecycle:
Provisioning
There are two ways PVs may be provisioned: statically or dynamically.
Static
A cluster administrator creates a number of PVs that carry the details of the real storage available to cluster users. They exist in the Kubernetes API and are available for consumption.
Dynamic
When none of the static PVs the administrator created matches a user's PVC, the cluster may try to dynamically provision a volume specifically for that PVC. This provisioning is based on StorageClasses: the PVC must request a class, and the administrator must have created and configured that class for dynamic provisioning to occur. A PVC that requests the class "" effectively disables dynamic provisioning for itself.
Binding
A user creates a PVC (or one has already been created by dynamic provisioning), specifying the required storage size and access mode. A control loop in the master watches for new PVCs, finds a matching PV where possible, and binds the claim to the volume. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to that PVC. Otherwise, the user always gets at least what they asked for, though the volume may exceed the request. Once bound, a PVC binding is exclusive, regardless of how it was bound: a PVC-to-PV binding is a one-to-one mapping.
If no matching PV exists, the PVC remains unbound indefinitely, and will become bound once a matching PV becomes available. For example, a cluster provisioned with many 50Gi PVs will not match a PVC requesting 100Gi; the PVC stays unbound until a 100Gi PV is added to the cluster.
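To make the size-matching point concrete, here is a minimal sketch (the names are hypothetical, and the NFS backend is an assumption borrowed from the examples later in this article) of a claim that stays unbound until a large-enough volume appears:

```yaml
# Hypothetical claim: stays Pending (unbound) in a cluster
# that only has 50Gi PVs available.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: big-claim          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi       # no 50Gi PV can satisfy this request
---
# Adding a PV of at least 100Gi allows the claim above to bind.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: big-pv             # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  nfs:                     # assumed NFS backend
    path: /exports/big
    server: 172.17.0.2
```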
Use
Pods use claims as volumes. The cluster inspects the claim to find the bound PV and mounts that volume into the Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using the claim as a volume.
Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim entry in the volumes block of the Pod.
Release
When a user is done with a volume, they can delete the PVC object through the API. When the claim is deleted, the corresponding PV is considered "released", but it is not yet available for another claim. The previous claimant's data remains on the volume and must be handled according to policy.
Reclaiming
The reclaim policy of a PV tells the cluster what to do with the volume after it has been released. Currently, volumes can be Retained, Recycled, or Deleted. Retain allows the resource to be manually reclaimed. For volume plugins that support it, Delete removes both the PV object from Kubernetes and the associated storage asset in the external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume). Dynamically provisioned volumes are always deleted.
Recycled (reuse)
If supported by the underlying volume plugin, Recycle performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim.
Administrators can configure a custom recycler Pod template using the command-line arguments of the Kubernetes controller manager. The custom recycler Pod template must contain a volumes specification, as in the example below:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: "gcr.io/google_containers/busybox"
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
```
However, the path specified in the volumes part of the custom recycler Pod template is replaced with the particular path of the volume that is being recycled.
PV Types
PV types are implemented in the form of plug-ins. Kubernetes now supports the following plug-ins:
GCEPersistentDisk
AWSElasticBlockStore
AzureFile
AzureDisk
FC (Fibre Channel)
Flocker
NFS
ISCSI
RBD (Ceph Block Device)
CephFS
Cinder (OpenStack block storage)
Glusterfs
VsphereVolume
Quobyte Volumes
HostPath (for single-node testing only; local storage is not supported in any way and will not work in a multi-node cluster)
VMware Photon
Portworx Volumes
ScaleIO Volumes
PV introduction
Each PV contains a spec and status, that is, the specification and the status of the PV volume.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  nfs:
    path: /tmp
    server: 172.17.0.2
```

Capacity
Generally, a PV has a specific storage capacity, set via its capacity attribute. See the Kubernetes Resource Model to understand the units expected by capacity.
Currently, the storage size is the only resource that can be set or requested. The future may include attributes such as IOPS, throughput and so on.
Access mode
A PV can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing its capabilities.
Access modes include:
▷ ReadWriteOnce - the volume can be mounted read-write by a single node
▷ ReadOnlyMany - the volume can be mounted read-only by many nodes
▷ ReadWriteMany - the volume can be mounted read-write by many nodes
In CLI, the access mode can be abbreviated as:
▷ RWO-ReadWriteOnce
▷ ROX-ReadOnlyMany
▷ RWX-ReadWriteMany
Note: a volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or as ReadOnlyMany by many nodes, but not at the same time.
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany |
| --- | --- | --- | --- |
| AWSElasticBlockStore | ✓ | - | - |
| AzureFile | ✓ | ✓ | ✓ |
| AzureDisk | ✓ | - | - |
| CephFS | ✓ | ✓ | ✓ |
| Cinder | ✓ | - | - |
| FC | ✓ | ✓ | - |
| FlexVolume | ✓ | ✓ | - |
| Flocker | ✓ | - | - |
| GCEPersistentDisk | ✓ | ✓ | - |
| Glusterfs | ✓ | ✓ | ✓ |
| HostPath | ✓ | - | - |
| iSCSI | ✓ | ✓ | - |
| PhotonPersistentDisk | ✓ | - | - |
| Quobyte | ✓ | ✓ | ✓ |
| NFS | ✓ | ✓ | ✓ |
| RBD | ✓ | ✓ | - |
| VsphereVolume | ✓ | - | - |
| PortworxVolume | ✓ | - | ✓ |
| ScaleIO | ✓ | ✓ | - |
Class
A PV can have a class, specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
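As a sketch of how a class pairs a PV with a PVC (the names are hypothetical and the NFS backend is assumed; only the matching storageClassName values matter):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-slow-1          # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: slow   # this PV belongs to class "slow"
  nfs:                     # assumed NFS backend
    path: /exports/slow1
    server: 172.17.0.2
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-slow         # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: slow   # only PVs of class "slow" are binding candidates
```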
In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. The annotation still works, but it will be fully deprecated in a future Kubernetes release.
Recycling strategy
The current recycling strategies are:
▷ Retain: manual recycling
▷ Recycle: basic scrub (rm -rf /thevolume/*), after which the volume can be claimed again
▷ Delete: the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and OpenStack Cinder volumes support deletion.
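For example, a volume whose data must survive the deletion of its claim can be declared with the Retain policy; this is a sketch with a hypothetical name and an assumed NFS backend:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-keep-data                       # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain    # released volume must be reclaimed manually
  nfs:                                     # assumed NFS backend
    path: /exports/keep
    server: 172.17.0.2
```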
Phase
A volume will be in one of the following phases:
▷ Available: a free resource not yet bound to a claim
▷ Bound: the volume is bound to a claim
▷ Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
▷ Failed: the volume has failed its automatic reclamation
The CLI shows the name of the PVC bound to each PV.
Mount options
A Kubernetes administrator can specify additional mount options to be used when a PV is mounted on a node. Mount options are specified with the annotation volume.beta.kubernetes.io/mount-options on the PV.
For example:
```yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: gce-disk-1
  annotations:
    volume.beta.kubernetes.io/mount-options: "discard"
spec:
  capacity:
    storage: "10Gi"
  accessModes:
  - "ReadWriteOnce"
  gcePersistentDisk:
    fsType: "ext4"
    pdName: "gce-disk-1"
```
A mount option is a string that is cumulatively joined and used when the persistent volume is mounted to the disk.
Note that not all PV types support mount options. In Kubernetes v1.6, the following PV types support mount options:
● GCEPersistentDisk
● AWSElasticBlockStore
● AzureFile
● AzureDisk
● NFS
● iSCSI
● RBD (Ceph Block Device)
● CephFS
● Cinder (OpenStack block storage)
● Glusterfs
● VsphereVolume
● Quobyte Volumes
● VMware Photon
PersistentVolumeClaims (PVC)
Each PVC contains a spec and status, i.e., the specification and status of the claim.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}
```

Access mode
When requesting storage that specifies an access mode, PVC uses the same rules as PV.
Resources
Claims, like Pods, can request specific quantities of a resource; here, the request is for storage. Volumes and claims both use the same resource model.
Selector
Claims can specify a label selector to further filter the set of volumes; only volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:
● matchLabels - the volume must have labels with these exact values
● matchExpressions - a list of requirements made by specifying a key, a list of values, and an operator relating the key to the values. Valid operators include In, NotIn, Exists, and DoesNotExist.
All of the requirements from both matchLabels and matchExpressions are ANDed together; they must all be satisfied in order to match.
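For instance, the selector in the myclaim example above would match a PV carrying labels like these (a sketch; the name and the NFS backend are assumptions):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-stable-dev        # hypothetical name
  labels:
    release: "stable"        # satisfies the matchLabels requirement
    environment: dev         # satisfies the In expression
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: slow
  nfs:                       # assumed NFS backend
    path: /exports/stable
    server: 172.17.0.2
```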
Class
A claim can request a particular class by specifying the name of a StorageClass in its storageClassName attribute. Only PVs of the requested class, i.e., with the same storageClassName as the PVC, can be bound to the PVC.
PVCs don't necessarily have to request a class. A PVC with storageClassName set to "" is always interpreted as requesting a PV with no class, so it can only be bound to PVs with no class (no annotation, or one set to ""). A PVC with no storageClassName attribute at all is not quite the same, and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on.
If the admission plugin is turned on, the administrator may specify a default StorageClass. All PVCs that have no storageClassName can then be bound only to PVs of that default class. Specifying a default StorageClass is done by setting the annotation storageclass.kubernetes.io/is-default-class to "true" on a StorageClass object. If the administrator does not specify a default, the cluster responds to PVC creation as if the admission plugin were turned off. If more than one default is specified, the admission plugin forbids the creation of all PVCs.
If the admission plugin is turned off, there is no notion of a default StorageClass. All PVCs that have no storageClassName can be bound only to PVs that have no class, so they are treated the same way as PVCs with storageClassName set to "".
Depending on the installation method, the default StorageClass may be managed by the plug-in and deployed in the Kubernetes cluster by default during installation.
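Marking a class as the default uses the annotation described above; as a sketch (the name "standard" and the AWS gp2 parameters are assumptions echoing the StorageClass example later in the article):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # makes this class the cluster default
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

With this in place, PVCs created without any storageClassName are bound to (or dynamically provisioned from) this class.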
When a PVC specifies a selector in addition to requesting a StorageClass, the requirements are ANDed together: only a PV of the requested class and with the requested labels may be bound to the PVC. Note that currently a PVC with a non-empty selector cannot have a PV dynamically provisioned for it.
In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. The annotation still works, but it will be fully deprecated in a future Kubernetes release.
Use PVC
Pods access storage by using the claim as a volume. The claim must exist in the same namespace as the Pod using it. The cluster finds the claim in the Pod's namespace, uses it to get the PV backing the claim, and then mounts that volume on the host and into the Pod.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
```

Namespace considerations
PV binds are exclusive, and since PVCs are namespaced objects, mounting claims with the "Many" modes (ROX, RWX) is only possible within one namespace.
StorageClass
Each StorageClass contains the fields provisioner and parameters, which are used when a PV belonging to the class needs to be dynamically provisioned.
The name of a StorageClass object is significant: it is how users request a particular class. Administrators set the name and other parameters of a class when first creating StorageClass objects, and the objects cannot be updated once they are created.
The administrator can specify a default StorageClass for binding to PVC that does not request a specified level. Please refer to the PersistentVolumeClaim section for more information.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

Provisioner
Each StorageClass has a provisioner that determines which volume plugin is used for provisioning PVs. This field is required.
You are not restricted to the "internal" provisioners listed here (whose names are prefixed with "kubernetes.io" and shipped alongside Kubernetes). You can also run and specify external provisioners, which are independent programs that follow a specification defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how healthy it is, and which volume plugin it uses (including Flex). The repository kubernetes-incubator/external-storage houses a library for writing external provisioners that implements the bulk of the specification, and is maintained by various community members.
Parameters
A StorageClass has parameters that describe the volumes belonging to it. Different parameters may be accepted depending on the provisioner. For example, the value io1 for the parameter type, and the parameter iopsPerGB, are specific to EBS. When a parameter is omitted, its default value is used.
AWS
...
GCE
...
Glusterfs
...
OpenStack Cinder
...
VSphere
...
Ceph RBD

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
```
● monitors: Ceph monitors, comma delimited. This parameter is required.
● adminId: Ceph client ID that is capable of creating images in the pool. Default is "admin".
● adminSecretNamespace: the namespace for adminSecretName. Default is "default".
● adminSecretName: Secret name for adminId. This parameter is required; the provided secret must have type "kubernetes.io/rbd".
● pool: Ceph RBD pool. Default is "rbd".
● userId: Ceph client ID that is used to map the RBD image. Default is the same as adminId.
● userSecretName: the name of the Ceph Secret that userId uses to map the RBD image. It must exist in the same namespace as the PVC. This parameter is required; the provided secret must have type "kubernetes.io/rbd". For example, it can be created as follows:
```shell
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \
  --namespace=kube-system
```

Quobyte
...
Azure Disk
...
Portworx Volume
...
ScaleIO
...
Configuration
If you are writing configuration templates and examples for use in clusters that require persistent storage, we recommend that you use the following patterns:
● Include PersistentVolumeClaim objects in your bundle of configuration (alongside Deployments, ConfigMaps, etc.).
● Do not include PersistentVolume objects in the config, since the user instantiating the config may not have permission to create PersistentVolumes.
● Give the user the option of providing a storage class name when instantiating the template.
▷ If the user provides a StorageClass name and the Kubernetes version is 1.4 or later, put that value in the volume.beta.kubernetes.io/storage-class annotation of the PVC. This will cause the PVC to match the right StorageClass.
▷ If the user does not provide a StorageClass name, or if the cluster is version 1.3, then instead set the volume.alpha.kubernetes.io/storage-class: default annotation on the PVC.
☞ This will cause a PV to be dynamically provisioned for the user in clusters that have sensible default configurations.
☞ Despite the alpha in its name, the code behind this annotation has beta-level support.
☞ Do not use volume.beta.kubernetes.io/storage-class here, with any value, even the empty string, since it will prevent the DefaultStorageClass admission controller from acting.
● In your tooling, watch for PVCs that are not getting bound after some time and surface this to the user, as it may indicate that the cluster has no dynamic storage support (in which case the user should create a matching PV) or that the cluster has no storage system (in which case the user cannot deploy config requiring PVCs).
● In the future, we expect most clusters to have DefaultStorageClass enabled and some form of storage available. However, there may not be any storage class names that work across all clusters, so continue to not set one by default. At some point, the alpha annotation will cease to have meaning, but the unset storageClass field on the PVC will have the desired effect.
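Putting the annotation recommendations above together, a templated claim might look like this sketch (the name and class value are placeholders the user would supply):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: template-claim     # hypothetical name
  annotations:
    # Set from a user-supplied value on clusters >= 1.4:
    volume.beta.kubernetes.io/storage-class: "standard"
    # On 1.3 clusters, or when the user supplies no class name,
    # use this annotation instead (and omit the beta one entirely):
    # volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```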