1 Overview of the storage volume concept
A Pod has its own lifecycle, so the containers running inside it and their data cannot be persisted on their own. Docker supports configuring containers to use storage volumes to persist data outside the container's own file system; these can be node file systems or network file systems. The storage volumes provided by Kubernetes, correspondingly, belong to the Pod resource level and are shared by all containers within the Pod. They can be used to store application data outside the container's own file system, and can even achieve data persistence independent of the Pod lifecycle.
Storage volume: a shared directory defined at the Pod resource level that can be mounted by all containers inside it. It is associated with storage space on an external storage device and is therefore independent of the container's own file system; whether the data is persistent depends on whether the storage volume itself supports persistence.
2 Storage volume types
1 Node-level storage volumes
emptyDir: its lifecycle is the same as that of the Pod resource
hostPath: it can provide persistent storage, but if the Pod is rescheduled to another node, the data on the original node has to be migrated to the new node, otherwise persistence is lost
2 Network-level storage volumes
NFS
Ceph
GlusterFS
...
These can provide persistent data storage.
3 Special storage volumes
Secret: used to pass sensitive information such as passwords, private keys and certificates to a Pod
ConfigMap: used to inject non-sensitive data such as configuration files into a Pod, enabling centralized definition and management of container configuration
3 Storage volume usage
1 Specifying and configuring directly in the Pod
Storage defined in a Pod consists of two parts:
1 pods.spec.volumes: used to define the list of storage volumes to be mounted in the Pod and the source of the storage
Core fields
pods.spec.volumes.name: defines the name of this storage volume, which is referenced later in the mount
pods.spec.volumes.<type>: specifies the type of this storage volume, such as emptyDir, nfs, gitRepo, etc.
2 pods.spec.containers.volumeMounts: used to define where the storage volume is mounted inside the container and how it is mounted
Core fields
pods.spec.containers.volumeMounts.mountPath: the specific location to mount to inside the target container, such as /usr/share/nginx/html
pods.spec.containers.volumeMounts.name: references the name of the storage volume defined above
pods.spec.containers.volumeMounts.readOnly: specifies whether the storage volume is read-only; the default is false, i.e. the volume is mounted read-write
pods.spec.containers.volumeMounts.subPath: defines a sub-path to use when mounting the storage volume, i.e. a sub-directory of the volume is used as the mount point at the path specified by mountPath
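As a hedged illustration of these fields (the Pod name, image and paths below are made up for the example), the following minimal sketch mounts a volume with subPath so that only one sub-directory of the volume appears at the mount point, read-write by default:

apiVersion: v1
kind: Pod
metadata:
  name: vol-fields-demo            # hypothetical name
spec:
  volumes:
  - name: data                     # pods.spec.volumes.name
    emptyDir: {}                   # the volume type field
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data                   # references the volume name defined above
      mountPath: /var/www/html     # location inside the container
      subPath: html                # mount only the html sub-directory of the volume
      readOnly: false              # read-write, which is the default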
2 Referencing and configuring indirectly through PV and PVC
Two: Temporary storage volumes
1 Overview
The lifecycle of an emptyDir storage volume is the same as that of its Pod; it is commonly used for data caching, temporary storage, data sharing between containers, and so on.
A gitRepo storage volume can be regarded as a practical application of the emptyDir volume. A Pod that uses such a volume can access data from the specified code repository through the mounted directory. When a Pod with a gitRepo storage volume is created, an empty directory (emptyDir) is created first, the data from the specified Git repository is cloned into it, and only then is the container created and the volume mounted.
2 emptyDir storage volume
pods.spec.volumes.emptyDir mainly contains two fields:
pods.spec.volumes.emptyDir.medium: the type of storage medium backing this directory. The value is "" (default) or "Memory". The default means the node's default storage medium is used; "Memory" means a RAM-based temporary file system (tmpfs), which is limited by memory but performs well and is usually used to provide cache space for containers.
pods.spec.volumes.emptyDir.sizeLimit: specifies a size limit for the current storage volume. The default is nil, which means no limit.
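A minimal sketch of these two fields (the Pod name and mount path are illustrative), declaring a memory-backed emptyDir capped at 256Mi; the fuller example below shows two containers sharing an emptyDir volume:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mem-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory               # tmpfs backed by node RAM
      sizeLimit: 256Mi             # cap the volume at 256Mi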
Example
# [root@master1 emptyDir]# cat demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-empty
  namespace: default
spec:
  volumes:                         # define the storage volumes to be mounted
  - name: html                     # specify the storage volume name
    emptyDir: {}                   # specify the storage volume type
  containers:
  - name: nginx1
    image: nginx:1.14
    ports:
    - name: http
      containerPort: 80
    volumeMounts:                  # mount the storage volume
    - name: html                   # name of the storage volume to mount
      mountPath: /usr/share/nginx/html   # mount point inside the container
      readOnly: true               # mount the storage volume read-only
  - name: nginx2                   # a second container, used to write to the shared volume
    image: alpine
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]     # write data into the storage volume
    args:
    - "while true; do echo $(date) >> /html/index.html; sleep 5; done"
Deployment
kubectl apply -f demo.yaml
View
test
3 gitRepo storage volume
1 Core fields
pods.spec.volumes.gitRepo.repository: the Git repository URL; required field
pods.spec.volumes.gitRepo.directory: the target directory name; it cannot contain the ".." characters. "." means the repository data is copied directly into the volume directory; any other value means the data is copied into a sub-directory of the volume directory named after the given string.
pods.spec.volumes.gitRepo.revision: the commit hash of the specific revision to check out.
Git must be installed on the worker node that runs a Pod using a gitRepo storage volume, otherwise cloning the repository will fail. Since version 1.12, the gitRepo storage volume has been deprecated.
2 Practice
1 Install git
yum -y install git
2 Create an instance and mount the storage volume

[root@master em]# cat demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: default
spec:
  containers:
  - name: demo
    image: nginx:1.14
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    gitRepo:
      directory: "."
      repository: https://gitee.com/ChangPaoZhe/test
      revision: "develop"

3 Deployment

kubectl apply -f demo.yaml

4 View and test
Three: Node storage volumes
1 Overview
A hostPath storage volume mounts a directory or file from the worker node's file system into a Pod. It can exist independently of the Pod resource's lifecycle and is therefore persistent, but because it is node-local it only suits specific storage volume use cases.
2 Core fields
There are two nested fields for configuring a hostPath storage volume:
1 pods.spec.volumes.hostPath.path: required field specifying the directory path on the worker node
2 pods.spec.volumes.hostPath.type: specifies the hostPath storage volume type; the supported types include:
A DirectoryOrCreate: if the specified path does not exist, it is automatically created as an empty directory with permissions 0755; its owner and group are those of kubelet.
B Directory: a directory that must already exist.
C FileOrCreate: if the specified path does not exist, an empty file with permissions 0644 is created automatically; its owner and group are also those of kubelet.
D File: a file that must already exist.
E Socket: a UNIX socket file that must already exist.
F CharDevice: a character device file that must already exist.
G BlockDevice: a block device file that must already exist.
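A minimal sketch, assuming a hypothetical Pod and node directory, showing how the type field is combined with path; DirectoryOrCreate creates /data/webroot on the node with mode 0755 if it does not already exist:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo         # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.14
    volumeMounts:
    - name: webroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: webroot
    hostPath:
      path: /data/webroot          # directory on the worker node
      type: DirectoryOrCreate      # create it if it does not exist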
3 Instance

# [root@master em]# cat demo1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo1
spec:
  containers:
  - name: demo1
    image: ikubernetes/filebeat:5.6.7-alpine
    env:                           # set environment variables
    - name: REDIS_HOST
      value: redis.ilinux.io:6379
    - name: LOG_LEVEL
      value: info
    volumeMounts:                  # set the container mounts
    - name: varlog
      mountPath: /var/log
    - name: socket
      mountPath: /var/run/docker.sock
    - name: varlib
      mountPath: /var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog                   # node-level mount
    hostPath:                      # set the node path to mount
      path: /var/log
  - name: varlib
    hostPath:
      path: /var/lib/docker/containers
  - name: socket
    hostPath:
      path: /var/run/docker.sock
Deployment
kubectl apply -f demo1.yaml
Pod resources of this kind are usually managed by DaemonSet-type Pod controllers and are responsible for collecting system-level data on every worker node in the cluster. However, the files on different nodes may differ, so whether the files or directories that must exist beforehand are actually present may also differ from node to node. In addition, files or directories created on a node are writable only by root by default; if you expect processes in the container to have write permission, either run it as a privileged container or change the permissions of the directory path on the node.
Four: Network storage volumes
1 Overview
Dedicated network storage volume types include:
1 Traditional NAS or SAN devices (such as NFS, iSCSI, FC)
2 Distributed storage (GlusterFS, RBD)
3 Cloud storage
4 Abstraction and management layers built on top of various storage systems, etc.
2 NFS storage volume
1 Core fields
pods.spec.volumes.nfs
An NFS storage volume is only unmounted, not deleted, when the Pod object terminates, so it can provide persistent storage that survives Pod rescheduling.
Core fields:
pods.spec.volumes.nfs.server: the IP address or hostname of the NFS server; required field
pods.spec.volumes.nfs.path: the exported file system path on the NFS server; required field
pods.spec.volumes.nfs.readOnly: whether to mount read-only; the default is false.
2 Example
1 Deploy the NFS service (without password). The deployment result is as follows
2 configure related instances and deploy them
apiVersion: v1
kind: Pod
metadata:
  name: demo2
  namespace: default
spec:
  containers:
  - name: demo2
    image: nginx:1.14
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      server: 192.168.90.110
      path: /data/nfs/v1
      readOnly: false
Deployment
kubectl apply -f demo2.yaml
View
Configure related web pages
Test result
3 GlusterFS storage volume
1 Overview
GlusterFS (Gluster File System) is an open source distributed file system and the core of the horizontally scalable storage solution Gluster. GlusterFS can scale out to several PB of storage capacity and handle thousands of clients. It aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages data under a single global namespace. In addition, GlusterFS is based on a stackable user-space design, delivers excellent performance for a variety of workloads, and is another popular distributed storage solution. To configure Pod resources to use GlusterFS storage volumes, you need:
1 An available GlusterFS storage cluster
2 A volume created in the GlusterFS cluster that can satisfy the Pod's data storage needs
3 The glusterfs client packages installed on every node of the k8s cluster
In addition, if you need to use the dynamic provisioning mechanism for GlusterFS-based storage volumes, heketi must also be deployed in advance; it provides a RESTful management interface for GlusterFS clusters. The configuration of the GlusterFS cluster and heketi is not detailed here.
2 Core fields
endpoints: the name of an Endpoints resource, which must exist in advance; it provides node information of the Gluster cluster as the access entry point. Required field
path: the volume path on the GlusterFS cluster, such as kube-redis; required field
readOnly: whether it is a read-only volume
Note
For an example deployment, refer to: https://blog.csdn.net/phn_csdn/article/details/75153913?utm_source=debugrun&utm_medium=referral
The following packages need to be installed on the nodes:
yum install -y glusterfs glusterfs-fuse
Example
[root@master all]# cat demo4.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster                    # set the name
subsets:                           # set the related host addresses and port numbers
- addresses:
  - ip: 192.168.90.120
  ports:
  - port: 24007
    name: glusterd
- addresses:
  - ip: 192.168.90.110
  ports:
  - port: 24007
    name: glusterd
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-gfs-dep
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: gfs
  template:
    metadata:
      name: demo-gfs
      namespace: default
      labels:
        app: gfs
    spec:
      containers:
      - name: demo-gfs
        image: nginx:1.14
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        glusterfs:                 # configure the mount type
          endpoints: gluster       # Endpoints resource name
          path: models             # GlusterFS volume name, not a directory inside the shared volume
          readOnly: false
Deployment
kubectl apply -f demo4.yaml
View
Create a folder on one node; it is visible on the other nodes.
Inject content into it
View on the storage nodes
Access test
Five: Persistent storage volumes
1 Overview
The disadvantage of the network storage volumes above is that administrators must understand the access details of the network storage system in use before they can complete the storage volume configuration, which runs against Kubernetes's goal of hiding the underlying architecture from users and developers. Ideally, storage resources, like compute resources, could be consumed without users and developers needing to know what storage system backs a Pod's mounts or where it is located. To this end, Kubernetes's PersistentVolume subsystem adds an abstraction layer between users and administrators, decoupling the use of a storage system from its management.
2 PersistentVolume (PV)
1 Overview
A PV is an abstraction of the underlying shared storage: the shared storage is treated as a resource that users can request, implementing a "storage as consumable" mechanism. Through storage plug-ins, PV supports many kinds of back-end storage systems, such as network or cloud storage, for example NFS, RBD and Cinder. A PV is a cluster-level resource and does not belong to any namespace. To use PV resources, users create a PVC (PersistentVolumeClaim) to request a binding; PVCs are the consumers of PV resources, requesting a specific amount of space and particular access modes from PVs. The created PVC storage is then used by Pod resources through persistentVolumeClaim storage volumes.
Although PVCs let users consume storage resources in an abstract way, many PV attributes are still involved, and a mismatch between the two inevitably means that user needs cannot always be met effectively and in time. Since Kubernetes 1.4, the StorageClass resource type has been available; it defines storage resources as classes with distinct characteristics rather than as specific PVs. Users request the class they are interested in directly through a PVC, which is matched to a PV created in advance by the administrator or to a PV dynamically created on demand, thus eliminating the need to create PVs beforehand.
2 Creating a PV
1 PV fields
pv.spec.capacity: defines the capacity of the current PV; supports setting the space size
pv.spec.accessModes: access modes
ReadWriteOnce: can be mounted read-write by a single node only; abbreviated RWO on the command line
ReadOnlyMany: can be mounted read-only by multiple nodes; abbreviated ROX
ReadWriteMany: can be mounted read-write by multiple nodes at the same time; abbreviated RWX
Each type of storage volume supports a different subset of these read and write modes.
pv.spec.persistentVolumeReclaimPolicy: the handling mechanism when the PV's space is released. Available values are Retain (default, keep), Recycle (recycle) or Delete (delete).
Retain: the administrator handles the volume manually.
Recycle: space recycling; deletes all files under the storage volume directory. Currently only NFS and hostPath support this operation.
Delete: deletes the storage volume; only some cloud storage systems support it, such as AWS EBS, GCE PD, Azure Disk and Cinder.
pv.spec.volumeMode: volume mode, used to specify whether this volume is used as a file system or a raw block device; the default is file system.
pv.spec.storageClassName: the name of the StorageClass the current PV belongs to; the default is empty.
pv.spec.mountOptions: a list of mount options, such as ro, soft or hard.

2 Create PVs

[root@master1 pv]# cat demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  labels:
    app: v1
spec:
  accessModes: ["ReadWriteMany"]   # set the access modes; several can be set at once
  capacity:                        # define the PV storage size
    storage: 5Gi
  nfs:                             # set the NFS parameters
    path: /data/v1                 # NFS export path
    server: 192.168.1.100          # NFS server address; a domain name can also be used
    readOnly: false                # set it read-write
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
  labels:
    app: v2
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany"]
  capacity:
    storage: 5Gi
  nfs:
    path: /data/v2
    server: 192.168.1.100
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
  labels:
    app: v3
spec:
  accessModes: ["ReadWriteMany", "ReadOnlyMany"]
  capacity:
    storage: 10Gi
  nfs:
    path: /data/v3
    server: 192.168.1.100
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv4
  labels:
    app: v4
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  capacity:
    storage: 5Gi
  nfs:
    path: /data/v4
    server: 192.168.1.100
    readOnly: false

3 Deployment

kubectl apply -f demo.yaml

4 View
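The PVs above only set accessModes and capacity. As a hedged sketch of the remaining fields described earlier (the PV name, class name, export path and mount options are illustrative), a PV could also declare its reclaim policy, volume mode and storage class:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo                       # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain      # keep the volume and its data after the PVC is deleted
  volumeMode: Filesystem                     # use it as a file system (the default)
  storageClassName: slow                     # illustrative storage class name
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/v5                           # illustrative NFS export
    server: 192.168.1.100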
5 PV resource status
Available: a free resource that is not yet bound to any PVC
Bound: already bound to a PVC
Released: the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: automatic reclamation of the resource failed
3 Creating a PVC
1 Overview
PersistentVolumeClaim (PVC) is a storage claim resource; it is created to request and occupy a PV, with which it has a one-to-one relationship. Users do not need to care about the underlying implementation details; they only specify the target space size, access modes, a PV label selector, the storage class and other related information.
2 PVC core fields
pvc.spec.accessModes: the PVC access modes, defined the same way as for PV
ReadWriteOnce: can be mounted read-write by a single node only; abbreviated RWO on the command line
ReadOnlyMany: can be mounted read-only by multiple nodes; abbreviated ROX
ReadWriteMany: can be mounted read-write by multiple nodes at the same time; abbreviated RWX
pvc.spec.resources: the amount of resources the current PVC storage volume requests; currently only space size is supported, including limits and requests
pvc.spec.selector: a label selector the PVC uses to select PVs; only PVs matching this selector can be bound to this PVC
pvc.spec.storageClassName: the name of the storage class the PVC depends on
pvc.spec.volumeMode: volume mode, used to specify whether this volume is used as a file system or a raw block device; the default is file system
pvc.spec.volumeName: used to directly specify the name of the PV to bind

3 Create PVCs

[root@master1 pv]# cat demopvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  labels:
    app: pvc1
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi
  selector:                        # use labels to select pv2
    matchLabels:
      app: v2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  labels:
    app: pvc2
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]   # this access-mode combination only matches pv4
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  labels:
    app: pvc3
spec:
  accessModes: ["ReadWriteMany"]
  resources:                       # the size request matches pv3
    requests:
      storage: 8Gi
Create PVC
kubectl apply -f demopvc.yaml
View
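The PVCs above select PVs through labels and access modes. As a hedged sketch of the remaining fields (the claim name is made up), a PVC can also name a storage class explicitly or bind a specific PV directly through volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-direct-demo            # hypothetical name
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""             # empty string: only match PVs that have no storage class
  volumeName: pv1                  # bind directly to the PV named pv1 defined above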
4 Binding a PVC in a Pod
1 Core fields
pods.spec.volumes.persistentVolumeClaim.claimName: the name of the PVC to bind; required field
pods.spec.volumes.persistentVolumeClaim.readOnly: whether the volume is mounted read-only
2 Instance

# [root@master1 pv]# cat pod.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dem-po
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo1
  replicas: 3
  template:
    metadata:
      name: dem-dem
      namespace: default
      labels:
        app: demo1
    spec:
      containers:
      - name: dem-po
        image: nginx:1.14
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: pvc1
Deployment
kubectl apply -f pod.yaml
View
test
4 Storage class (StorageClass)
1 Concept
A storage class (StorageClass), abbreviated SC, is one of the Kubernetes resource types. It is a category created on demand by administrators to classify and manage PVs, and can be understood as a profile of PV characteristics.
It works by passing information to the back-end storage so that front-end requests can be matched against it.
One advantage of storage classes is support for dynamic PV creation. When persistent storage is used, a PVC must be created and bound to a matching PV; this kind of operation is in heavy demand, and when the PVs created by the administrator cannot satisfy the PVC's requirements, dynamically creating a suitable PV according to the PVC's requirements brings great flexibility to storage management.
A PVC obtains a PV by requesting a storage class.
The name of a storage class object is important, as it is the identifier users call it by. Besides the name, three key fields need to be defined for a storage class: provisioner (sc.provisioner), parameters (sc.parameters) and reclaimPolicy (sc.reclaimPolicy).
2 Core fields
1 The storageClass spec is the most important part when defining a storage class; it contains five available fields:
provisioner (supplier): the storage system that provides the storage resources. The storage class relies on the provisioner to determine which storage plug-in to use in order to adapt to the target storage system. K8s ships with a variety of provisioners, all prefixed with kubernetes.io, and it also allows users to define custom provisioners according to the Kubernetes specification.
2 parameters: the storage class uses parameters to describe the storage volumes to be associated; the available parameters differ between provisioners.
3 reclaimPolicy: defines the reclaim policy; available values are Delete and Retain.
4 volumeBindingMode: defines how PVs are provisioned and bound for PVCs; the default is Immediate. This option only takes effect when the volume scheduling feature is enabled.
5 mountOptions: a list of mount options for PVs dynamically created by this class.
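A hedged sketch putting these five fields together; the class name, the in-tree AWS EBS provisioner and its type parameter are standard illustrative values and are not part of this article's GlusterFS setup:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-sc                        # hypothetical name
provisioner: kubernetes.io/aws-ebs     # storage plug-in that will create the volumes
parameters:
  type: gp2                            # provisioner-specific parameter
reclaimPolicy: Retain                  # keep dynamically created PVs after their PVC is deleted
volumeBindingMode: Immediate           # bind as soon as the PVC is created (the default)
mountOptions:
- debug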
Note
For the environment required above, please refer to: https://www.cnblogs.com/breezey/p/8849466.html
The heketi key must exist on the Kubernetes nodes, the heketi-client software is required, and the dm_thin_pool kernel module must be loaded (modprobe dm_thin_pool).
If password authentication is used, the heketi password is put into a Secret (encoded) in advance.
Password authentication is shown in the following example.
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
stringData:                              # set the password
  key: admin
type: kubernetes.io/glusterfs
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
  namespace: default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.90.110:8080"            # heketi REST interface to call
  clusterid: "84d3dcbeb048089c18807b7be943d219"    # query the cluster ID and fill it in
  restauthenabled: "true"                          # enable authentication
  restuser: "admin"                                # authenticated user name
  secretNamespace: "default"                       # namespace of the secret
  secretName: "heketi-secret"                      # name of the secret used for authentication
  # restuserkey: "adminkey"                        # the password can also be written here directly
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"                        # set the volume type
Deployment
kubectl apply -f sc.yaml
View
2 Dynamic PV provisioning
To enable dynamic PV provisioning, the administrator needs to create at least one storage class; different provisioners are created in different ways.
Consuming storage resources through a storage class:
At present, there are two ways to specify the storage class used in a PVC definition: one is the pvc.spec.storageClassName field, and the other is the "volume.beta.kubernetes.io/storage-class" annotation. The first method is recommended, to avoid misconfiguration caused by the two being set to different values.
Any storage system that supports dynamic PV provisioning can, once defined as a storage class, be requested dynamically by PVCs. Storage volumes are an essential resource, and demand for them changes as the scale changes.
Configure PVC
# [root@master all]# cat pvc1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"   # specify here to obtain storage from the glusterfs storage class
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment
kubectl apply -f pvc1.yaml
View
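A hedged sketch of the first, recommended method; it is equivalent to the annotation-based example above but uses the pvc.spec.storageClassName field (the claim name is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-sc         # hypothetical name
  namespace: default
spec:
  storageClassName: glusterfs      # request the storage class by name
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi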
5 PV and PVC lifecycle
1 Overview
A PV is a cluster-level resource, while a PVC is a request for that resource. Once a PVC requests and binds a PV, the PV and PVC have a one-to-one relationship; a PV that can serve a PVC must be able to satisfy the PVC's request conditions.
2 Storage provisioning
1 Static provisioning
Static provisioning is the mode in which the cluster administrator manually creates a number of PVs. These PVs carry the details of the storage system and abstract it into easy-to-use storage resources for users. Statically provisioned PVs may or may not belong to a storage class.
2 Dynamic provisioning
When no static PV matches a user's PVC request, the Kubernetes cluster tries to dynamically create a PV that satisfies the PVC; this is called dynamic provisioning. This method relies on storage classes: the PVC must request dynamic PV allocation from a pre-existing storage class, and a PVC that does not specify a storage class cannot trigger dynamic PV creation.
3 Storage binding
After a user defines a PVC with a set of storage requirements and access modes, the Kubernetes controller finds a matching PV for it, establishes the relationship between the two, and their states change to Bound. If the PV was dynamically created for the PVC, that PV is dedicated to that PVC.
If no matching PV can be found for a PVC, the PVC remains unbound until a qualifying PV appears and the binding completes.
1 Storage usage
By defining a volume of type pod.spec.volumes.persistentVolumeClaim, a Pod resource associates the selected PVC as a storage volume, which its containers can then use. For storage volumes that support multiple access modes, users need to specify the desired mode additionally. Once the storage volume is mounted into a container of the Pod object, its application can use the associated PV's storage space.
2 PVC protection
To avoid data loss from removing a storage volume that is still in use, since Kubernetes 1.9 a PVC cannot be deleted while Pod resources are still using it.
4 Storage recovery
After the storage volume has served its purpose, the PVC object can be deleted so the resource can be reclaimed. The reclaim policies are as follows:
1 Retain
Retain means that after the PVC is deleted, Kubernetes does not automatically delete the PV, but only puts it into the Released state. However, a PV in this state cannot be bound by another PVC, because the data from the previous binding still exists and the follow-up handling must be decided manually by the administrator. This means that to reuse such a PV resource, the administrator needs to do the following:
1 Delete the PV; after this, the PV's data still exists on the external storage.
2 Manually clean up the data left on the storage.
3 Manually delete the storage volume at the storage-system level to free the space, then re-create the PV, or simply re-create the PV directly (a sketch of this procedure follows below).
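A hedged sketch of that manual procedure, reusing the pv1/NFS export from the earlier examples; the cleanup step depends entirely on the backing storage system:

# 1. delete the released PV object; its data still exists on the NFS export
kubectl delete pv pv1
# 2. clean up the leftover data on the storage system (run on the NFS server)
rm -rf /data/v1/*
# 3. re-create the PV so it can be bound again
kubectl apply -f demo.yaml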
2 Recycle
If the underlying storage plug-in supports it, the Recycle policy performs a data deletion on the storage volume and makes the PV resource available to be claimed again. In addition, the administrator can configure a custom recycler Pod template to perform custom recycling operations.
3 Delete
For storage plug-ins that support the Delete reclaim policy, the PV object is removed directly after the PVC is deleted, together with the related storage assets on the external storage system. Storage systems that support this operation include AWS EBS, GCE PD, Azure Disk and Cinder. The reclaim policy of dynamically created PV resources is taken from the definition on the related storage class, and the default policy on a storage class is Delete.
4 Expanding a PVC
Kubernetes has had a feature for expanding PV space since version 1.8. Currently, the volume types supported by the PVC expansion mechanism are:
gcePersistentDisk
awsElasticBlockStore
Cinder
glusterfs
rbd
The "PersistentVolumeClaimResize" admission plug-in is responsible for performing more validation operations on storage volumes that support space size changes, and administrators need to enable this plug-in in advance to use the PVC extension mechanism
For storage volumes that contain file systems, file system resizing is performed only when the new POD resource starts using PVC based on read-write mode, in other words, if an extended storage volume is already used by the POD resource, the POD object needs to be recreated to initiate the file system sizing operation. File systems that support space adjustment are XFS, EXT3, and EXT4
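As a hedged sketch of how an expansion is requested in practice (assuming the claim is the pvc1 from earlier and that its storage class has allowVolumeExpansion: true; the target size is illustrative), raising the PVC's storage request is enough to trigger the resize:

# raise the requested size on the PVC; the controller then expands the underlying volume
kubectl patch pvc pvc1 -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
# watch the resize progress in the PVC's events and status conditions
kubectl describe pvc pvc1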