How to understand container storage and K8s storage volume in cloud native storage

2025-03-31 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article shows you how to understand container storage and K8s storage volumes in cloud native storage. The content is concise and easy to follow; I hope the detailed introduction below gives you something to take away.

Two key areas of cloud native storage: Docker volume and K8s volume

Docker storage volumes: storage organization for container services on a single node, focusing on data storage and container-runtime-related technologies

K8s storage volumes: cluster-level storage orchestration for containers, looking at storage services from the perspective of the storage an application consumes.

Docker storage

One major advantage of today's popular container services is how container images are organized when containers run. By reusing container images, containers on the same node can share an image resource (more precisely, share particular image layers), avoiding copying and loading image files every time a container starts. This both saves host storage space and speeds up container startup.

1. Container read-write layer

To improve node storage efficiency, containers not only share image resources among running containers but also share data between different images. The principle of image data sharing: an image is composed of layers, i.e. a complete image contains multiple data layers, and the layers overlay one another to form the final complete image.

To share image data among multiple containers, every layer of a container image is read-only. Yet in practice we know that when a container is started from an image, we can freely read and write inside it. How is that achieved?

When a container uses an image, a read-write layer is added on top of the image layers. Each running container mounts a read-write layer on top of its image, and all user operations in the container happen in that read-write layer. Once the container is destroyed, the read-write layer is destroyed with it.

In the example shown in the figure above, there are three containers on one node, running from two images.

The image storage layers are as follows:

This node contains six image layers in total: Layer 1 through Layer 6.

Image 1 consists of Layers 1, 3, 4, and 5.

Image 2 consists of Layers 2, 3, 5, and 6.

So the two images share the Layer 3 and Layer 5 image layers.

Container storage description:

Container 1: started from Image 1

Container 2: started from Image 1

Container 3: started from Image 2

Containers 1 and 2 share Image 1, and each container has its own writable layer

Containers 1 and 2 share two image layers (Layers 3 and 5) with Container 3

As this example shows, data sharing through container image layering greatly reduces the host storage that container services require.
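The layer-sharing arithmetic above can be checked with a small sketch (purely illustrative, using the layer numbers from the example, not Docker internals):

```python
# Model the six layers from the example above as sets and compute
# what layer sharing saves on the node.
image1 = {1, 3, 4, 5}   # layers making up Image 1
image2 = {2, 3, 5, 6}   # layers making up Image 2

shared = image1 & image2         # layers stored once but used by both images
total_layers = image1 | image2   # layers actually stored on the node

print(shared)              # {3, 5}
print(len(total_layers))   # 6 distinct layers instead of 8 without sharing
```

All three containers then add only a thin writable layer each on top of these shared read-only layers.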

The container read-write layer structure is given above; the read and write principles are as follows:

For reads: the container is composed of many layers of data; when a file exists in several layers, the rule is that the upper layer's data shadows the lower layer's.

For writes: when the container modifies a file, the change happens in the top read-write layer. The main implementation techniques are copy-on-write and allocate-on-demand.

1) Copy-on-write

Copy-on-write (CoW) means copying only when a write is needed, and applies to modifying existing files. CoW lets all containers share the image's file system, with all data read from the image. Only when a file is about to be written is it copied from the image into the top read-write layer for modification. So no matter how many containers share the same image, writes happen on the copy, and the image's source file is never modified. When multiple containers modify the same file, each container's file system gets its own copy; each container modifies its own copy, isolated from and unaffected by the others.

2) Allocate-on-demand

Allocate-on-demand: in scenarios where a file does not yet exist in the image, space is allocated only when a new file is written, which improves storage utilization. For example, starting a container does not pre-allocate disk space for it; new space is allocated on demand as new files are written.
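Allocate-on-demand can be observed at the file-system level with a sparse file: the apparent size is declared up front, but blocks are allocated only where data is actually written (a minimal sketch; the allocated-bytes figure is file-system-dependent):

```python
# Sketch of allocate-on-demand at the filesystem level: a sparse file has a
# large apparent size but consumes blocks only where data is written.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sparse.img")
with open(path, "wb") as f:
    f.truncate(1 << 20)       # declare 1 MiB without writing any data

st = os.stat(path)
print(st.st_size)             # 1048576 apparent bytes
print(st.st_blocks * 512)     # allocated bytes -- typically far less
```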

2. Storage drivers

A storage driver manages the data of each container layer, delivering the sharing and read-write behavior described above. That is, the container storage driver implements the storage and management of the container read-write layer's data. Common storage drivers:

AUFS

OverlayFS

Devicemapper

Btrfs

ZFS

Taking AUFS as an example, here is how a storage driver works:

AUFS is a union file system (UnionFS) and a file-level storage driver.

AUFS is a layered file system that transparently overlays one or more existing file systems, merging a multi-layer file system into a single-layer view. In other words, it supports mounting different directories under the same virtual file system.

Files can be overlaid and modified layer by layer; the lower layers are read-only, and only the top file system is writable.

When a file needs to be modified, AUFS creates a copy of it, using CoW to copy the file from a read-only layer into the writable layer for modification; the result is also saved in the writable layer.

In Docker, the bottom read-only layers are the image, and the writable layer is the container's runtime read-write layer.
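A minimal sketch of these union-filesystem semantics (a hypothetical model, not AUFS code): reads search the layers top-down, and a write copies data up into the container's writable layer, leaving the image layers untouched:

```python
# Toy union filesystem: read-only image layers plus one writable top layer.
class UnionFS:
    def __init__(self, *readonly_layers):
        self.layers = list(readonly_layers)  # bottom .. top, all read-only
        self.rw = {}                         # the container's writable layer

    def read(self, name):
        if name in self.rw:                  # upper layer shadows lower layers
            return self.rw[name]
        for layer in reversed(self.layers):  # search top-down
            if name in layer:
                return layer[name]
        raise FileNotFoundError(name)

    def write(self, name, data):
        self.rw[name] = data                 # copy-on-write: image untouched

image = {"etc/nginx.conf": "original"}
c1, c2 = UnionFS(image), UnionFS(image)      # two containers share one image
c1.write("etc/nginx.conf", "tuned")
print(c1.read("etc/nginx.conf"))  # tuned    (from c1's writable layer)
print(c2.read("etc/nginx.conf"))  # original (the image layer is unchanged)
```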

Other storage drivers are not described in detail here; interested readers can look them up online.

3. Introduction to Docker data volumes

The application's reads and writes inside the container happen in the container's read-write layer. The image layers plus the read-write layer are mapped to the container's internal file system and form the underlying architecture of container-internal storage. When an application inside the container needs to interact with external storage, it needs something like a computer's USB drive: external storage. Container data volumes provide this capability.

On the other hand, a container's own storage is ephemeral: the data is deleted when the container is destroyed. If external storage is mounted into the container's file system through a data volume, the application can reference external data or persist the data it produces into the data volume, so container data volumes are how containers persist data.

Container storage composition: read-only layers (container image) + read-write layer + external storage (data volumes)

By scope, container data volumes can be divided into single-node data volumes and cluster data volumes. Single-node data volumes are the volume-mounting capability of container services on one node, with docker volume as the representative implementation; cluster data volumes focus on cluster-level data volume orchestration, with K8s data volumes as the main application form.

A Docker Volume is a directory usable by multiple containers. It bypasses the UnionFS and has the following features:

Data volumes can be shared and reused between containers

Compared with the writable layer implemented by the storage driver, data volume reads and writes go directly to external storage and are more efficient

Updates to data volumes are reads and writes on external storage and do not affect the image or the container's read-write layer

A data volume's lifecycle is independent of containers; it persists even when no container is using it.

1) Docker data volume type

Bind: mounts a host directory/file directly into the container.

An absolute path on the host is required; the host directory can be created automatically

The container can modify any file in the mounted directory, which is convenient for applications but also brings security risks

Volume: used when consuming third-party data volumes.

Volume command-line interface: docker volume (create/rm)

A feature provided by Docker, so it cannot be used outside a Docker environment

Divided into named and anonymous data volumes with the same implementation; the difference is that an anonymous data volume's name is a random ID

Supports volume-driver extensions to access more external storage types

Tmpfs: a non-persistent volume type, stored in memory.

Data is lost when the container stops.

2) Bind mount method syntax

-v src:dst:opts; the -v syntax supports the standalone engine only.

Src: the volume mapping source, a host directory or file; must be an absolute path

Dst: the target mount path inside the container

Opts: optional mount attributes: ro, consistent, delegated, cached, z, Z

Consistent, delegated, cached: configure shared propagation properties for macOS systems

z, Z: configure the SELinux label of the host directory.

Example:

$ docker run -d --name devtest -v /home:/data:ro,rslave nginx
$ docker run -d --name devtest --mount type=bind,source=/home,target=/data,readonly,bind-propagation=rslave nginx
$ docker run -d --name devtest -v /home:/data:z nginx

3) Volume mount method syntax

-v src:dst:opts; the -v syntax supports the standalone engine only.

Src: the volume mapping source, a data volume name, or empty (anonymous)

Dst: the target directory inside the container

Opts: optional mount attribute: ro (read-only).

Example:

$ docker run -d --name devtest -v myvol:/app:ro nginx
$ docker run -d --name devtest --mount source=myvol2,target=/app,readonly nginx

4. Use of Docker data volumes

How Docker data volumes are used:

1) Volume type

Anonymous data volume: docker run -d -v /data3 nginx

A directory is created by default on the host for the mapping: /var/lib/docker/volumes/{volume-id}/_data

Named data volume: docker run -d -v nas1:/data3 nginx

If no volume named nas1 exists, a volume of the default type (local) is created.

2) Bind mode

docker run -d -v /test:/data nginx

If the /test directory does not exist on the host, it is created by default.

3) data volume container

A data volume container is a running container. Other containers can inherit the data volumes mounted in this container, so all of its mounts are reflected in the referencing container.

docker run -d --volumes-from nginx1 -v /test1:/data1 nginx

This inherits all data volumes from the referenced container, in addition to the container's own defined volumes.

4) Mount and propagation of data volumes

Docker volume supports configurable mount propagation (Propagation):

Private: mounts do not propagate; mounts made under the source or the destination directory are not reflected on the other side

Shared: mounts propagate between source and destination

Slave: mounts under the source propagate to the destination, but not the other way around

Rprivate: recursive Private, the default mode

Rshared: recursive Shared

Rslave: recursive Slave.

Example:

$ docker run -d -v /home:/data:shared nginx
(mounts made under /home on the host are visible under /data in the container, and vice versa)
$ docker run -d -v /home:/data:slave nginx
(mounts made under /home on the host are visible under /data in the container, but not the other way around)

5) Visibility of data volume mounts

Volume mount visibility:

Local empty directory, image empty directory: no special treatment

Local empty directory, image non-empty directory: the image directory's contents are copied to the host (a copy, not a mapping; the content remains even if the container deletes it)

Local non-empty directory, image empty directory: the local directory's contents are mapped into the container

Local non-empty directory, image non-empty directory: the local directory's contents are mapped into the container, and the container directory's contents are hidden

Bind mount visibility: the host directory wins.

Local empty directory, image empty directory: no special treatment

Local empty directory, image non-empty directory: the container directory becomes empty

Local non-empty directory, image empty directory: the local directory's contents are mapped into the container

Local non-empty directory, image non-empty directory: the local directory's contents are mapped into the container, and the container directory's contents are hidden.
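The visibility rules above can be condensed into a small decision function (a simplified sketch; the kind flag, emptiness flags and return labels are illustrative names, not a Docker API):

```python
# Given the mount kind and whether the host/volume side and the image
# directory are empty, return which side the container ends up seeing.
def visible_contents(kind, host_empty, image_empty):
    # kind: "volume" (named/anonymous volume) or "bind" (host directory)
    if kind == "volume" and host_empty and not image_empty:
        return "image"   # image contents are copied out to the empty volume
    if host_empty and image_empty:
        return "empty"   # no special treatment
    return "host"        # in every other case the host/volume side wins

print(visible_contents("volume", True, False))  # image
print(visible_contents("bind", True, False))    # host -> container dir empty
print(visible_contents("bind", False, False))   # host, image contents hidden
```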

5. Docker data volume plug-in

Docker data volumes implement mounting external storage into the container's file system. To extend the range of external storage types available to containers, Docker mounts different types of storage services through storage plug-ins. These extension plug-ins are collectively called Volume Drivers; a storage plug-in can be developed for each storage type.

Multiple storage plug-ins can be deployed on a single node

Each storage plug-in is responsible for the mount service of one storage type.

Docker Daemon communicates with Volume driver in the following ways:

Sock files: on Linux, placed in the /run/docker/plugins directory

Spec files: defined at /etc/docker/plugins/convoy.spec

Json files: defined at /usr/lib/docker/plugins/infinit.json

Interfaces to implement:

Create, Remove, Mount, Path, Umount, Get, List, Capabilities
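The plug-in protocol is JSON posted to endpoints named after these interface methods (e.g. /VolumeDriver.Create). A hypothetical in-memory dispatcher sketch with stubbed storage logic (the endpoint names follow the documented protocol; everything else here is illustrative):

```python
# Toy Volume Driver dispatcher: Docker POSTs JSON bodies to per-method
# endpoints over the plugin's sock/spec/json address.
import json

volumes = {}  # name -> creation options (stand-in for real backend state)

def handle(endpoint, body):
    req = json.loads(body)
    if endpoint == "/VolumeDriver.Create":
        volumes[req["Name"]] = req.get("Opts", {})
        return {"Err": ""}
    if endpoint == "/VolumeDriver.Remove":
        volumes.pop(req["Name"], None)
        return {"Err": ""}
    if endpoint == "/VolumeDriver.Mount":
        # a real driver would mount the backing storage and return its path
        return {"Mountpoint": "/mnt/volumes/" + req["Name"], "Err": ""}
    if endpoint == "/VolumeDriver.List":
        return {"Volumes": [{"Name": n} for n in volumes], "Err": ""}
    return {"Err": "unsupported endpoint"}

print(handle("/VolumeDriver.Create",
             '{"Name": "nas1", "Opts": {"host": "10.46.225.247"}}'))
print(handle("/VolumeDriver.Mount", '{"Name": "nas1", "ID": "abc"}'))
```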

Examples of use:

$ docker volume create --driver nas -o diskid="" -o host="10.46.225.247" -o path="/nas1" -o mode="" --name nas1

Docker VolumeDriver is suitable for data volume management in a stand-alone container environment or swarm platform. With the popularity of K8s, there are fewer and fewer usage scenarios. For more information on VolumeDriver, please refer to: https://docs.docker.com/engine/extend/plugins_volume/

K8s storage volumes

1. Basic concepts

As described earlier, persisting container data requires data volumes. How is storage defined for a running workload (Pod) in the K8s orchestration system? K8s is a container orchestration system that manages and deploys container applications across a whole cluster, so K8s application storage must be considered from the cluster's perspective. K8s storage volumes define the relationship between applications and storage in the K8s system, covering the following concepts:

1) Volume data Volume

Data volumes define the details of external storage and are embedded in the Pod as part of it. A data volume is essentially an object in the K8s system that records external storage; when a workload needs external storage, the relevant information is looked up from the data volume and the storage mount is performed.

Its lifecycle is the same as the Pod's: when the pod is deleted, the data volume definition also disappears (note: not the data itself)

Storage details are defined in the orchestration template, so the application orchestration is aware of the storage details

Multiple volumes can be defined in one workload (Pod) at the same time, of the same or different storage types

Each container in a Pod can reference one or more volumes, and different containers can use the same volume simultaneously.

Common types of K8S Volume:

Local storage: e.g. HostPath, emptyDir. These volumes keep data on a specific node of the cluster, and the data becomes unavailable when that node goes down.

Network storage: Ceph, GlusterFS, NFS, iSCSI and other types. These volumes keep data not on a cluster node but on a remote storage service; using the volume requires mounting the storage service locally.

Secret/ConfigMap: these volume types carry cluster object data that does not belong to any node; in use, the object data is mounted onto the node as a volume for the application to consume.

CSI/Flexvolume: two data volume extension mechanisms, best understood as abstract data volume types; each extension mechanism subdivides into concrete storage types.

PVC: a data volume definition method that abstracts the data volume into an object independent of the pod. The storage information this object defines (or is associated with) is the real storage information of the storage volume, used by K8s workloads for mounting.

Some examples of volume templates are as follows:

volumes:
- name: hostpath
  hostPath:
    path: /data
    type: Directory
---
volumes:
- name: disk-ssd
  persistentVolumeClaim:
    claimName: disk-ssd-web-0
- name: default-token-krggw
  secret:
    defaultMode: 420
    secretName: default-token-krggw
---
volumes:
- name: "oss1"
  flexVolume:
    driver: "alicloud/oss"
    options:
      bucket: "docker"
      url: "oss-cn-hangzhou.aliyuncs.com"

2) PVC and PV

K8s storage volume is a cluster-level concept, and its object scope is the entire K8s cluster, not one node.

The K8s storage volume contains objects (PVC, PV, SC) that are independent of the application payload (Pod) and are associated by orchestration templates.

K8s storage volumes can have their own independent lifecycle and are not attached to Pod.

PVC is short for PersistentVolumeClaim, a storage claim. A PVC is an abstract storage volume type in K8s that represents a data volume of some specific storage type. Its design intent is to separate storage from application orchestration: storage details are abstracted away, enabling storage orchestration. Storage volume objects in K8s thus exist independently of application orchestration, decoupling applications from storage at the orchestration level.

PV is short for PersistentVolume, a persistent storage volume. A PV represents a volume of a specific storage type in K8s; the concrete storage type and volume parameters are defined in the object. That is, all the information about the target storage service is kept in the PV, and K8s performs the mount operation by referencing the storage information in the PV.

The relationship among application workload, PVC and PV is as follows:

From an implementation point of view, PV alone could achieve both the separation of storage from application orchestration and data volume mounting, so why use two objects, PVC + PV? K8s abstracts storage volumes a second time from the application's point of view. A PV describes a concrete storage type and must define detailed storage information, while application-layer users consuming storage usually do not want to know too many underlying details, so defining concrete storage services at the application orchestration layer is unfriendly. The storage service is therefore abstracted once more: only the parameters users care about are extracted, and the PVC abstracts the lower-level PV. PVC and PV thus have different concerns: the PVC focuses on the user's storage needs and gives users a unified storage definition; the PV focuses on storage details and can define the concrete storage type, detailed mount parameters, and so on.

In use, the application layer declares a storage requirement (PVC), and K8s selects and binds the best-matching PV that satisfies the PVC. In terms of responsibility, the PVC is the storage object the application needs and belongs to the application's scope (it lives in the same namespace as the application); the PV is the storage object of the storage plane, belonging to the whole storage domain (it does not belong to any namespace).

Here are some properties of PVC and PV:

PVCs and PVs always come in pairs: a PVC must be bound to a PV before a Pod can consume it.

The PVC-PV binding is one-to-one: a PV is never bound by multiple PVCs, and a PVC never binds multiple PVs.

The PVC is an application-level storage concept and belongs to a specific namespace.

The PV is a storage-level concept; it is cluster-scoped, belongs to no namespace, and is often managed by dedicated storage operations staff.

Consumption relationship: the Pod consumes the PVC, the PVC consumes the PV, and the PV defines the concrete storage medium.

3) detailed definition of PVC

The templates defined by PVC are as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-ssd-web-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: alicloud-disk-available
  volumeMode: Filesystem

The storage interface a PVC defines includes access mode, resource capacity, volume mode, etc. The main parameters are:

accessModes: the access mode of the storage volume. Three modes are supported: ReadWriteOnce, ReadWriteMany and ReadOnlyMany.

ReadWriteOnce: the PVC can be mounted read-write by only one pod at a time

ReadWriteMany: the PVC can be mounted read-write by multiple pods at the same time

ReadOnlyMany: the PVC can be mounted read-only by multiple pods at the same time

Note: the access mode defined here is only a declaration at the orchestration level; whether the application can actually read or write the storage is determined by the specific storage plug-in implementation.

storage: the storage capacity this PVC object expects. Again, the size here is only a declared value; whether it is enforced depends on the underlying storage service type.

volumeMode: the storage volume mount mode; Filesystem and Block are supported

Filesystem: the data volume is mounted and supplied as a file system

Block: the data volume is mounted and supplied as a block device.

4) detailed definition of PV

The following is an example of orchestration of PV objects for cloud disk data volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    failure-domain.beta.kubernetes.io/region: cn-shenzhen
    failure-domain.beta.kubernetes.io/zone: cn-shenzhen-e
  name: d-wz9g2j5qbo37r2lamkg4
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  flexVolume:
    driver: alicloud/disk
    fsType: ext4
    options:
      VolumeId: d-wz9g2j5qbo37r2lamkg4
  persistentVolumeReclaimPolicy: Delete
  storageClassName: alicloud-disk-available
  volumeMode: Filesystem

accessModes: access mode of the storage volume; ReadWriteOnce, ReadWriteMany and ReadOnlyMany are supported, with the same meaning as the PVC field.

capacity: defines the storage volume's capacity

persistentVolumeReclaimPolicy: defines the reclaim policy, i.e. how the PV is handled after its PVC is deleted; Delete and Retain are supported. This parameter is detailed in the dynamic data volume section.

storageClassName: the name of the storage class this storage volume uses. This parameter is covered in the dynamic data volume section.

volumeMode: same as the volumeMode definition in the PVC

flexVolume: this field indicates the concrete storage type. Here Flexvolume is an abstract storage type; the concrete storage type and storage parameters are defined in flexVolume's sub-configuration items.

5) PVC/PV binding

A PVC can be used by a Pod only after it is bound to a PV, and binding a PV is the process of consuming it. The process follows certain rules; only a PV satisfying the rules below can be bound by the PVC:

VolumeMode: the consumed PV's VolumeMode must match the PVC's

AccessMode: the consumed PV's AccessMode must match the PVC's

StorageClassName: if the PVC defines this parameter, a PV must define the same parameter to be bound

LabelSelector: a suitable PV is selected from the PV list by label matching

Storage: the consumed PV's capacity must be greater than or equal to the PVC's requested capacity.

Only a PV that satisfies all of the above can be bound by the PVC.

If multiple PVs satisfy the requirements at once, the most suitable one is chosen: usually the one with the smallest capacity, and if several share the smallest capacity, one is chosen at random.

If no PV satisfies the above requirements, the PVC stays in the Pending state until a suitable PV appears and can be bound.
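The binding rules above can be sketched as a matching function (greatly simplified; the real matcher lives in kube-controller-manager, and the dict fields here are illustrative):

```python
# A PV qualifies only if mode/access/class/labels match and its capacity
# covers the claim; among qualifying PVs, the smallest one wins.
def bind(pvc, pvs):
    candidates = [
        pv for pv in pvs
        if pv["volumeMode"] == pvc["volumeMode"]
        and pv["accessMode"] == pvc["accessMode"]
        and pv.get("storageClassName") == pvc.get("storageClassName")
        and all(pv.get("labels", {}).get(k) == v
                for k, v in pvc.get("selector", {}).items())
        and pv["capacityGi"] >= pvc["requestGi"]
    ]
    return min(candidates, key=lambda pv: pv["capacityGi"], default=None)

pvc = {"volumeMode": "Filesystem", "accessMode": "ReadWriteOnce", "requestGi": 20}
pvs = [
    {"name": "pv-50", "volumeMode": "Filesystem", "accessMode": "ReadWriteOnce", "capacityGi": 50},
    {"name": "pv-30", "volumeMode": "Filesystem", "accessMode": "ReadWriteOnce", "capacityGi": 30},
    {"name": "pv-10", "volumeMode": "Filesystem", "accessMode": "ReadWriteOnce", "capacityGi": 10},
]
print(bind(pvc, pvs)["name"])  # pv-30: the smallest PV that still covers 20Gi
```

With no qualifying PV the function returns None, which corresponds to the PVC staying Pending.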

2. Static and dynamic storage volumes

From the discussion above, the PVC is a second-level storage abstraction for application services with a concise storage-definition interface, while the PV carries the tedious storage details and is generally defined and maintained by dedicated cluster administrators.

Storage volumes can be divided into dynamic and static storage volumes according to how PV is created:

Static storage volumes: PV created by administrator

Dynamic storage volumes: PV created by the Provisioner plug-in

1) static storage volume

Generally, the cluster administrator analyzes the storage requirements in the cluster, allocates some storage media in advance, creates corresponding PV objects, and waits for PVC to consume the created PV objects. If the PVC requirement is defined in the load, K8s will bind the PVC to the matching PV through the relevant rules, thus realizing the application's ability to access the storage service.

2) dynamic storage volume

The cluster administrator configures the backend storage pool and creates the corresponding template (storageclass). When there is a PVC that needs to consume PV, according to the requirements defined by PVC and referring to the storage details of storageclass, the Provisioner plug-in dynamically creates a PV.

Comparison of the two volumes:

Both dynamic and static storage volumes end up with the same usage chain: Pod -> PVC -> PV, and the concrete object template definitions are identical.

Dynamic storage volumes differ from static ones in that the PV is created automatically by a plug-in, while for static volumes the cluster administrator creates the PV manually.

Benefits of dynamic storage volumes:

Dynamic volumes let K8s manage the PV lifecycle automatically; PV creation and deletion are handled by the Provisioner.

Automatic PV creation reduces configuration complexity and the system administrator's workload

Dynamic volumes provision PVs whose capacity exactly matches the PVC's request, optimizing storage capacity planning.

3) the implementation process of dynamic volume

When a user declares a PVC with the StorageClassName field set, the intent is: when the PVC cannot find a matching PV in the cluster, the Provisioner plug-in corresponding to StorageClassName is triggered to create a suitable PV for binding, i.e. to create a dynamic data volume; a dynamically created PV is associated with the PVC through StorageClassName.

StorageClass can be translated as storage class and serves as a template for creating PV storage volumes; when a PVC triggers automatic PV creation, the PV is created from the contents of the StorageClass object, which include: the target Provisioner name, detailed parameters for creating the PV, the reclaim policy, and other configuration.

The StorageClasss template is defined as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-disk-topology
parameters:
  type: cloud_ssd
provisioner: diskplugin.csi.alibabacloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

provisioner: the name of a registered plug-in that implements PV creation; a StorageClass can define only one Provisioner

parameters: the concrete parameters for creating the data volume; for example, the above means creating a cloud disk of type SSD

reclaimPolicy: specifies the persistentVolumeReclaimPolicy field of the created PV; Delete and Retain are supported. Delete means a dynamically created PV is automatically destroyed along with its claim; Retain means the PV is not destroyed automatically but handled by the administrator.

allowVolumeExpansion: defines whether PVs created from this storage class allow dynamic expansion; false by default. Whether expansion actually works is implemented by the underlying storage plug-in; this field is only a switch.

volumeBindingMode: indicates when the PV is dynamically created; Immediate/WaitForFirstConsumer are supported, meaning immediate creation and delayed creation respectively.

When a user creates a PVC, K8s looks for a suitable PV in the cluster to bind to it. If no suitable PV is found, the following process is triggered:

The Volume Provisioner watches for the PVC; if the PVC defines a StorageClassName and the Provisioner plug-in defined in that StorageClass object is itself, the Provisioner triggers the PV creation process

The Provisioner creates the PV based on the parameters defined by the PVC (size, VolumeMode, AccessModes) and by the StorageClass (ReclaimPolicy, Parameters)

The Provisioner creates a data volume on the storage backend (through an API call or otherwise), then creates the PV object

Once the PV is created, it is bound to the PVC, satisfying the subsequent Pod startup process.
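That provisioning flow can be sketched as a single function (a toy model; real Provisioners watch the API server and call storage APIs, and all field names below are illustrative):

```python
# If the claim names a StorageClass served by this provisioner, build a PV
# from the class parameters, sized to the claim, and bind it back.
def provision(pvc, storage_classes, my_driver="diskplugin.csi.alibabacloud.com"):
    sc = storage_classes.get(pvc["storageClassName"])
    if sc is None or sc["provisioner"] != my_driver:
        return None                              # some other provisioner's job
    return {
        "name": "pv-" + pvc["name"],
        "capacityGi": pvc["requestGi"],          # sized exactly to the claim
        "parameters": sc["parameters"],          # e.g. {"type": "cloud_ssd"}
        "persistentVolumeReclaimPolicy": sc["reclaimPolicy"],
        "claimRef": pvc["name"],                 # bound back to the PVC
    }

classes = {"alicloud-disk-topology": {
    "provisioner": "diskplugin.csi.alibabacloud.com",
    "parameters": {"type": "cloud_ssd"},
    "reclaimPolicy": "Delete",
}}
pv = provision({"name": "disk-ssd-web-0",
                "storageClassName": "alicloud-disk-topology",
                "requestGi": 20}, classes)
print(pv["capacityGi"], pv["persistentVolumeReclaimPolicy"])  # 20 Delete
```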

4) Delayed binding of dynamic data volumes

Some storage types (e.g. Alibaba Cloud disks) are restricted in mount locality: a data volume can be attached only to a node in the same availability zone, not across zones. Such storage volumes often run into the following problems:

A data volume has been created in availability zone A, but node resources in zone A are exhausted, so the Pod cannot complete the mount at startup.

When planning PVCs and PVs, the cluster administrator cannot determine in which availability zones to create spare PVs.

The volumeBindingMode field in StorageClass solves this problem. If volumeBindingMode is set to WaitForFirstConsumer, the Provisioner does not create the data volume as soon as it sees the PVC, but waits until the PVC is consumed by a Pod before running the creation process.

The implementation principle is as follows:

The Provisioner does not create a data volume immediately upon seeing the PVC in Pending status, but waits for the PVC to be consumed by a Pod

If a Pod consumes this PVC and the scheduler finds the PVC uses delayed binding, the scheduler still completes pod scheduling (storage-aware scheduling is explained in more detail later) and patches the scheduling result into the PVC's metadata

When the Provisioner finds scheduling information written into the PVC, it derives the location (zone, node) where the target data volume should be created and triggers the PV creation process.

As the process shows, delayed binding first schedules the application workload (ensuring there are sufficient node resources for the pod) and only then triggers dynamic volume creation. This avoids both the problem of no node resources in the data volume's availability zone and inaccurate storage planning.

In a multi-availability zone cluster environment, it is more recommended to use a dynamic volume scheme with delayed binding. Currently, Aliyun ACK cluster supports the above configuration scheme.

3. Usage example

The following is an example of pod consuming PVC and PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      alicloud-pvname: nas-csi-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-csi-pv
  labels:
    alicloud-pvname: nas-csi-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: "alicloud/nas"
    options:
      server: "***-42ad.cn-shenzhen.extreme.nas.aliyuncs.com"
      path: "/share/nas"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.8
      - name: nginx2
        image: nginx:1.7.9
        volumeMounts:
        - name: nas-pvc
          mountPath: "/data"
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-pvc

Template parsing:

This application is an Nginx service orchestrated as a Deployment; each pod contains 2 containers: nginx1 and nginx2

The volumes field in the template declares the data volume to be mounted for the application; this example defines the data volume through a PVC

In-container mount: the data volume nas-pvc is mounted at the /data directory of the nginx2 container; the nginx1 container mounts nothing

The PVC (nas-pvc) is defined as a storage volume of at least 50Gi capacity, ReadWriteOnce access mode, and with a label selector requirement on the PV

The PV (nas-csi-pv) is defined as a storage volume of 50Gi capacity, ReadWriteOnce access mode, Retain reclaim policy, the Flexvolume abstract type, and matching label configuration

By the PVC/PV binding logic, this PV satisfies the PVC's requirements, so the PVC binds to this PV and is mounted for the pod.

The above is how to understand container storage and K8s storage volumes in cloud native storage. Have you picked up some knowledge or skills? If you want to learn more or enrich your knowledge, you are welcome to follow the industry information channel.
