Parsing the Container Storage Interface (CSI)

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

Many newcomers are not very clear about the container storage interface CSI. To help with that, this article explains it in detail; I hope you can get something out of it.

Let's focus on CSI (Container Storage Interface) and explore what it is and how it works internally.

Background

K8s natively supports PVs for some storage types, such as iSCSI, NFS, CephFS, and so on. The code for these in-tree storage types lives in the Kubernetes code repository, which tightly couples the K8s code with the third-party storage vendors' code:

To change in-tree storage code, users must upgrade the K8s components, which is expensive.

Bugs in in-tree storage code can destabilize K8s components.

The K8s community has to maintain and test in-tree storage functionality.

In-tree storage plug-ins enjoy the same privileges as the K8s core components, which poses security risks.

Third-party storage developers must follow K8s community rules to develop in-tree storage code.

The CSI container storage interface standard solves these problems by decoupling third-party storage code from the K8s code base, so that third-party storage vendors only need to implement the CSI interfaces (without caring whether the container platform is K8s, Swarm, etc.).

The CSI Core Process

Before introducing the CSI components and their interfaces in detail, let's look at the CSI storage process in K8s. The article "Understanding the K8s persistent storage process" explains that mounting a storage volume for a Pod in K8s goes through three stages: Provision/Delete (create/delete the volume), Attach/Detach (attach to/detach from a node), and Mount/Unmount (mount/unmount on the node). The following explains how K8s uses CSI in each of these stages.

1. Provisioning Volumes

1. The cluster administrator creates a StorageClass resource, which contains the CSI plug-in name (provisioner: pangu.csi.alibabacloud.com) and the parameters needed by the storage class (parameters: type=cloud_ssd). The sc.yaml file is as follows:
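The sc.yaml manifest itself did not survive in this copy of the article; a minimal sketch consistent with the values quoted above (the class name csi-pangu is taken from the StorageClass shown later in this article) might look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-pangu                           # class name, referenced by the PVC
provisioner: pangu.csi.alibabacloud.com     # CSI plug-in name
parameters:
  type: cloud_ssd                           # passed through to CreateVolume
```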

2. The user creates a PersistentVolumeClaim resource; the PVC specifies the storage size and the StorageClass (as above). The pvc.yaml file is as follows:
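The pvc.yaml manifest is likewise missing here; a hedged sketch (the PVC name and the 20Gi size are illustrative, only the storageClassName is fixed by the text):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pangu-pvc                # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-pangu    # the StorageClass created above
  resources:
    requests:
      storage: 20Gi              # illustrative size
```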

3. The volume controller (PersistentVolumeController) observes that the newly created PVC in the cluster has no matching PV and that its storage type is out-of-tree, so it adds the annotation volume.beta.kubernetes.io/storage-provisioner=[out-of-tree CSI plug-in name] (here, pangu.csi.alibabacloud.com) to the PVC.

4. The External Provisioner component observes that the PVC's annotation contains volume.beta.kubernetes.io/storage-provisioner with its own name as the value, so it starts the volume creation process:

It fetches the related StorageClass resource and reads the parameters from it (here, type=cloud_ssd), which are used in the later CSI call.

It calls the CreateVolume function of the external CSI plug-in through a unix domain socket.

5. When the external CSI plug-in returns success, the volume has been created, and the External Provisioner component creates a PersistentVolume resource in the cluster.

6. The volume controller binds the PV to the PVC.

2. Attaching Volumes

1. The AD controller (AttachDetachController) observes that a Pod using a CSI-type PV has been scheduled to a node, so it calls the Attach function of the internal in-tree CSI plug-in (csiAttacher).

2. The internal in-tree CSI plug-in (csiAttacher) creates a VolumeAttachment object in the cluster.

3. External Attacher observes the VolumeAttachment object and calls the ControllerPublishVolume function of the external CSI plug-in to attach the volume to the corresponding node. After the external CSI plug-in attaches successfully, External Attacher sets .status.attached of the VolumeAttachment object to true.

4. The internal in-tree CSI plug-in (csiAttacher) in the AD controller observes that .status.attached of the VolumeAttachment object is true, so it updates the AD controller's internal state (ActualStateOfWorld), which is reflected in .status.volumesAttached of the Node resource.

3. Mounting Volumes

1. The Volume Manager (part of Kubelet) observes that a new Pod using a CSI-type PV has been scheduled to this node, so it calls the WaitForAttach function of the internal in-tree CSI plug-in (csiAttacher).

2. The internal in-tree CSI plug-in (csiAttacher) waits for .status.attached of the VolumeAttachment object in the cluster to become true.

3. The in-tree CSI plug-in (csiAttacher) runs the MountDevice function, which internally calls the NodeStageVolume function of the external CSI plug-in through a unix domain socket; Kubelet then calls the SetUp function of the internal in-tree CSI plug-in (csiMountMgr), which internally calls the NodePublishVolume function of the external CSI plug-in through a unix domain socket.

4. Unmounting Volumes

1. The user deletes the relevant Pod.

2. The Volume Manager (part of Kubelet) observes that the Pod containing the CSI storage volume has been deleted, so it calls the TearDown function of the internal in-tree CSI plug-in (csiMountMgr), which invokes the NodeUnpublishVolume function of the external CSI plug-in through a unix domain socket.

3. The Volume Manager (part of Kubelet) calls the UnmountDevice function of the internal in-tree CSI plug-in (csiAttacher), which invokes the NodeUnstageVolume function of the external CSI plug-in through a unix domain socket.

5. Detaching Volumes

1. When the AD controller observes that the Pod containing the CSI storage volume has been deleted, it calls the Detach function of the internal in-tree CSI plug-in (csiAttacher).

2. csiAttacher deletes the related VolumeAttachment object in the cluster (because of the finalizer, the VolumeAttachment object is not removed immediately).

3. External Attacher observes that the DeletionTimestamp of the VolumeAttachment object is not empty, so it calls the ControllerUnpublishVolume function of the external CSI plug-in to detach the volume from the corresponding node. After the external CSI plug-in detaches successfully, External Attacher removes the finalizer from the related VolumeAttachment object, and the VolumeAttachment object is then fully deleted.

4. The internal in-tree CSI plug-in (csiAttacher) in the AD controller observes that the VolumeAttachment object has been deleted and updates the AD controller's internal state; the AD controller also updates the Node resource, removing the related attach information from .status.volumesAttached of the Node resource.

6. Deleting Volumes

1. The user deletes the relevant PVC.

2. The External Provisioner component observes the PVC delete event and acts according to the PVC's reclaim policy:

Delete: it calls the DeleteVolume function of the external CSI plug-in to delete the volume; once the volume is deleted successfully, the Provisioner deletes the corresponding PV object in the cluster.

Retain: the Provisioner performs no volume deletion.
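The reclaim policy that drives this choice is normally declared on the StorageClass and copied to each dynamically provisioned PV; a sketch (the field names are standard Kubernetes, the class name is the one used elsewhere in this article):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-pangu
provisioner: pangu.csi.alibabacloud.com
# Delete or Retain; copied into .spec.persistentVolumeReclaimPolicy
# of every PV this class provisions
reclaimPolicy: Delete
```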

The CSI Sidecar Components

In order to adapt K8s to the CSI standard, the community moved the K8s-side storage process logic into CSI Sidecar components.

1. Node Driver Registrar

1) Function

The Node-Driver-Registrar component registers the external CSI plug-in with Kubelet, so that Kubelet can call the external CSI plug-in's functions through a specific Unix Domain Socket (Kubelet calls the external plug-in's NodeGetInfo, NodeStageVolume, NodePublishVolume, NodeGetVolumeStats, and so on).

2) Principle

The Node-Driver-Registrar component registers through Kubelet's external plug-in registration mechanism. After a successful registration:

Kubelet calls the NodeGetInfo function of the external CSI plug-in and writes the returned [nodeID] and [driverName] into the Node resource's annotations, as the value of the csi.volume.kubernetes.io/nodeid key.

Kubelet updates the Node labels: the [AccessibleTopology] value returned by NodeGetInfo is applied to the node's labels.

Kubelet updates the Node status: the maxAttachLimit returned by NodeGetInfo (the maximum number of volumes that can be attached to the node) is written into the Node resource's status as Allocatable: attachable-volumes-csi-[driverName]=[maxAttachLimit].

Kubelet updates the CSINode resource (creating it if absent): [driverName], [nodeID], [maxAttachLimit], and [AccessibleTopology] are written into its spec (only the keys of the topology are kept).
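After a successful registration, those updates leave the Node resource looking roughly like the following sketch (node name, nodeID, and the limit of 16 are illustrative; the annotation/label/allocatable keys are the ones named above):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1                                     # illustrative node name
  annotations:
    # written by Kubelet from the NodeGetInfo response
    csi.volume.kubernetes.io/nodeid: '{"pangu.csi.alibabacloud.com":"a5441fd90130..."}'
  labels:
    # from the AccessibleTopology returned by NodeGetInfo
    topology.pangu.csi.alibabacloud.com/zone: zone-1
status:
  allocatable:
    # maxAttachLimit from NodeGetInfo (value illustrative)
    attachable-volumes-csi-pangu.csi.alibabacloud.com: "16"
```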

2. External Provisioner

1) Function

Create / delete the actual storage volume and the PV resources that represent the storage volume.

2) Principle

When External-Provisioner starts, the --provisioner parameter must be specified; it sets the provisioner name, which corresponds to the provisioner field of the StorageClass.

After External-Provisioner starts, it watches the PVC and PV resources in the cluster.

For PVC resources in the cluster:

It determines whether the PVC needs a dynamically created storage volume, using the following criteria:

The PVC's annotation must contain the volume.beta.kubernetes.io/storage-provisioner key (added by the volume controller), with a value equal to the provisioner name.

If the VolumeBindingMode field of the PVC's StorageClass is WaitForFirstConsumer, the PVC's annotation must contain the key volume.kubernetes.io/selected-node with a non-empty value (see how the scheduler handles WaitForFirstConsumer below); if it is Immediate, the Provisioner must provide the dynamic storage volume immediately.

It calls the CreateVolume function of the external CSI plug-in through a specific Unix Domain Socket.

It creates a PV resource named [PV prefix specified by the Provisioner]-[PVC uuid].
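Putting the criteria together, a PVC that External-Provisioner would act on carries annotations roughly like this sketch (names and node are illustrative; the annotation keys are the ones quoted above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pangu-pvc
  annotations:
    # added by the volume controller; must match the provisioner name
    volume.beta.kubernetes.io/storage-provisioner: pangu.csi.alibabacloud.com
    # added by the scheduler for WaitForFirstConsumer classes
    volume.kubernetes.io/selected-node: node-1
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-pangu
  resources:
    requests:
      storage: 20Gi              # illustrative size
```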

For PV resources in the cluster:

It determines whether the PV needs to be deleted, using the following criteria:

Its .status.phase is Released.

Its .spec.persistentVolumeReclaimPolicy is Delete.

It contains the annotation pv.kubernetes.io/provisioned-by, with the Provisioner's own name as its value.

It calls the DeleteVolume interface of the external CSI plug-in through a specific Unix Domain Socket.

It deletes the PV resource from the cluster.

3. External Attacher

1) Function

Attach / detach the storage volume to / from a node.

2) Principle

External-Attacher watches the VolumeAttachment and PersistentVolume resources in the cluster.

For VolumeAttachment resources:

It gets all the needed PV information from the VolumeAttachment resource, such as the volume ID, node ID, and attach Secret.

It checks whether the DeletionTimestamp field of the VolumeAttachment is empty to decide between attach and detach: for an attach, it calls the ControllerPublishVolume interface of the external CSI plug-in through a specific Unix Domain Socket; for a detach, it calls the ControllerUnpublishVolume interface of the external CSI plug-in through the same socket.

For PersistentVolume resources:

On attach, it adds the finalizer external-attacher/[driver name] to the related PV.

When the PV is being deleted (DeletionTimestamp is not empty), it removes the finalizer external-attacher/[driver name].

4. External Resizer

1) Function

Expand the storage volume.

2) Principle

External-Resizer watches the PersistentVolumeClaim resources in the cluster.

For PersistentVolumeClaim resources:

It determines whether the PersistentVolumeClaim needs expansion: the PVC must be Bound, and .status.capacity must differ from .spec.resources.requests.

It updates the PVC's .status.conditions to indicate that it is in the Resizing state.

The ControllerExpandVolume interface of the external CSI plug-in is called through a specific Unix Domain Socket.

It updates the PV's .spec.capacity.

If the CSI plug-in supports online file-system expansion, the NodeExpansionRequired field in the ControllerExpandVolume response is true, and External-Resizer updates the PVC's .status.conditions to the FileSystemResizePending state; if not, the expansion is complete, and External-Resizer clears the PVC's .status.conditions and updates the PVC's .status.capacity.

The Volume Manager (part of Kubelet) observes that the storage volume needs online expansion, so it calls the NodeExpandVolume interface of the external CSI plug-in through a specific Unix Domain Socket to expand the file system.
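Seen from the API, a PVC in the middle of an online expansion looks roughly like the following sketch (the sizes are illustrative; the condition type is the one named above):

```yaml
# fragment of a PVC during expansion (illustrative sizes)
spec:
  resources:
    requests:
      storage: 40Gi        # the user raised the request (e.g. from 20Gi)
status:
  capacity:
    storage: 20Gi          # still the old size until NodeExpandVolume finishes
  conditions:
  - type: FileSystemResizePending
    status: "True"
```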

5. Livenessprobe

1) Function

Check whether the CSI plug-in is healthy.

2) Principle

It exposes a /healthz HTTP endpoint to serve Kubelet's liveness probe, and internally invokes the Probe interface of the external CSI plug-in through a specific Unix Domain Socket.
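As a sketch, the sidecar is typically wired into the CSI driver Pod like this (the image tag, port, and socket path are illustrative assumptions, not taken from the article):

```yaml
# fragment of a CSI driver Pod spec (illustrative values)
containers:
- name: liveness-probe
  image: registry.k8s.io/sig-storage/livenessprobe:v2.12.0   # illustrative tag
  args:
  - --csi-address=/csi/csi.sock    # the plug-in's Unix Domain Socket
  - --health-port=9808             # illustrative port
  volumeMounts:
  - name: socket-dir
    mountPath: /csi
- name: csi-plugin                 # the external CSI plug-in container
  livenessProbe:                   # Kubelet probes /healthz served by the sidecar
    httpGet:
      path: /healthz
      port: 9808
```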

The CSI Interfaces

Third-party storage vendors need to implement the three services of a CSI plug-in: IdentityServer, ControllerServer, and NodeServer.

1. IdentityServer

IdentityServer mainly exposes identity information about the CSI plug-in.

// IdentityServer is the server API for the Identity service.
type IdentityServer interface {
    // Get information about the CSI plug-in, such as its name and version
    GetPluginInfo(context.Context, *GetPluginInfoRequest) (*GetPluginInfoResponse, error)
    // Get the capabilities provided by the CSI plug-in,
    // e.g. whether it provides the Controller service
    GetPluginCapabilities(context.Context, *GetPluginCapabilitiesRequest) (*GetPluginCapabilitiesResponse, error)
    // Get the health status of the CSI plug-in
    Probe(context.Context, *ProbeRequest) (*ProbeResponse, error)
}

2. ControllerServer

ControllerServer is mainly responsible for creating / deleting storage volumes and snapshots, as well as attach / detach operations.

// ControllerServer is the server API for the Controller service.
type ControllerServer interface {
    // Create a storage volume
    CreateVolume(context.Context, *CreateVolumeRequest) (*CreateVolumeResponse, error)
    // Delete a storage volume
    DeleteVolume(context.Context, *DeleteVolumeRequest) (*DeleteVolumeResponse, error)
    // Attach the storage volume to a specific node
    ControllerPublishVolume(context.Context, *ControllerPublishVolumeRequest) (*ControllerPublishVolumeResponse, error)
    // Detach the storage volume from a specific node
    ControllerUnpublishVolume(context.Context, *ControllerUnpublishVolumeRequest) (*ControllerUnpublishVolumeResponse, error)
    // Validate whether the volume capabilities meet the requirements,
    // e.g. whether multi-node read/write is supported
    ValidateVolumeCapabilities(context.Context, *ValidateVolumeCapabilitiesRequest) (*ValidateVolumeCapabilitiesResponse, error)
    // List all storage volumes
    ListVolumes(context.Context, *ListVolumesRequest) (*ListVolumesResponse, error)
    // Get the available capacity of the storage pool
    GetCapacity(context.Context, *GetCapacityRequest) (*GetCapacityResponse, error)
    // Get the features supported by the ControllerServer,
    // e.g. whether snapshots are supported
    ControllerGetCapabilities(context.Context, *ControllerGetCapabilitiesRequest) (*ControllerGetCapabilitiesResponse, error)
    // Create a snapshot
    CreateSnapshot(context.Context, *CreateSnapshotRequest) (*CreateSnapshotResponse, error)
    // Delete a snapshot
    DeleteSnapshot(context.Context, *DeleteSnapshotRequest) (*DeleteSnapshotResponse, error)
    // List all snapshots
    ListSnapshots(context.Context, *ListSnapshotsRequest) (*ListSnapshotsResponse, error)
    // Expand a storage volume
    ControllerExpandVolume(context.Context, *ControllerExpandVolumeRequest) (*ControllerExpandVolumeResponse, error)
}

3. NodeServer

NodeServer is mainly responsible for storage volume mount / unmount operations.

// NodeServer is the server API for the Node service.
type NodeServer interface {
    // Format the volume and mount it to the temporary global directory
    NodeStageVolume(context.Context, *NodeStageVolumeRequest) (*NodeStageVolumeResponse, error)
    // Unmount the volume from the temporary global directory
    NodeUnstageVolume(context.Context, *NodeUnstageVolumeRequest) (*NodeUnstageVolumeResponse, error)
    // Bind-mount the volume from the temporary global directory to the target directory
    NodePublishVolume(context.Context, *NodePublishVolumeRequest) (*NodePublishVolumeResponse, error)
    // Unmount the volume from the target directory
    NodeUnpublishVolume(context.Context, *NodeUnpublishVolumeRequest) (*NodeUnpublishVolumeResponse, error)
    // Get capacity information about the volume
    NodeGetVolumeStats(context.Context, *NodeGetVolumeStatsRequest) (*NodeGetVolumeStatsResponse, error)
    // Expand the volume
    NodeExpandVolume(context.Context, *NodeExpandVolumeRequest) (*NodeExpandVolumeResponse, error)
    // Get the features supported by the NodeServer,
    // e.g. whether volume stats are supported
    NodeGetCapabilities(context.Context, *NodeGetCapabilitiesRequest) (*NodeGetCapabilitiesResponse, error)
    // Get CSI node information, such as the maximum number of volumes supported
    NodeGetInfo(context.Context, *NodeGetInfoRequest) (*NodeGetInfoResponse, error)
}

K8s CSI API Objects

K8s supports the CSI standard with the following API objects:

CSINode

CSIDriver

VolumeAttachment

1. CSINode

apiVersion: storage.k8s.io/v1beta1
kind: CSINode
metadata:
  name: node-10.212.101.210
spec:
  drivers:
  - name: yodaplugin.csi.alibabacloud.com
    nodeID: node-10.212.101.210
    topologyKeys:
    - kubernetes.io/hostname
  - name: pangu.csi.alibabacloud.com
    nodeID: a5441fd9013042ee8104a674e4a9666a
    topologyKeys:
    - topology.pangu.csi.alibabacloud.com/zone

Function:

It tells whether the external CSI plug-in has registered successfully. After the Node Driver Registrar component registers with Kubelet, Kubelet creates this resource, so there is no need to create CSINode explicitly.

It maps the Node resource name in Kubernetes one-to-one to the node name (nodeID) in the third-party storage system. Kubelet calls the NodeGetInfo function of the external CSI plug-in's NodeServer to obtain the nodeID.

It shows volume topology information. The topologyKeys in CSINode represent the topology of the storage node; the scheduler uses this topology information to select an appropriate node when scheduling a Pod.

2. CSIDriver

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: pangu.csi.alibabacloud.com
spec:
  # whether the plug-in supports volume attach (VolumeAttachment)
  attachRequired: true
  # whether the CSI plug-in needs Pod information during the mount phase
  podInfoOnMount: true
  # the volume lifecycle modes supported by the CSI plug-in
  volumeLifecycleModes:
  - Persistent

Function:

It simplifies discovery of external CSI plug-ins. It is created by the cluster administrator, and kubectl get csidriver shows which CSI plug-ins exist in the environment.

It customizes Kubernetes behavior: for example, some external CSI plug-ins do not need the volume attach (VolumeAttachment) operation, so .spec.attachRequired can be set to false.
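For example, an NFS-style plug-in that needs no attach step could be declared like this sketch (the plug-in name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.example.csi.com        # hypothetical plug-in name
spec:
  # skip the VolumeAttachment / ControllerPublishVolume step entirely
  attachRequired: false
```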

3. VolumeAttachment

apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
  annotations:
    csi.alpha.kubernetes.io/node-id: 21481ae252a2457f9abcb86a3d02ba05
  finalizers:
  - external-attacher/pangu-csi-alibabacloud-com
  name: csi-0996e5e9459e1ccc1b3a7aba07df4ef7301c8e283d99eabc1b69626b119ce750
spec:
  attacher: pangu.csi.alibabacloud.com
  nodeName: node-10.212.101.241
  source:
    persistentVolumeName: pangu-39aa24e7-8877-11eb-b02f-021234350de1
status:
  attached: true

Function: a VolumeAttachment records the attach / detach state of a storage volume and the node involved.

Supported Features

1. Topology support

StorageClass has an AllowedTopologies field:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-pangu
provisioner: pangu.csi.alibabacloud.com
parameters:
  type: cloud_ssd
volumeBindingMode: Immediate
allowedTopologies:
- matchLabelExpressions:
  - key: topology.pangu.csi.alibabacloud.com/zone
    values:
    - zone-1
    - zone-2

After the external CSI plug-in is deployed, each node is labeled with the [AccessibleTopology] value returned by the NodeGetInfo function (see the Node Driver Registrar section for details).

Before calling the CreateVolume interface of the CSI plug-in, External Provisioner sets AccessibilityRequirements in the request parameters:

For WaitForFirstConsumer

When the PVC's annotation contains volume.kubernetes.io/selected-node with a non-empty value, the TopologyKeys of that node's CSINode are read first; then the corresponding values are read from the Node resource's labels using those keys; finally those values are checked against the StorageClass's AllowedTopologies. If they are not included, an error is reported.

For Immediate

It fills in the value of the StorageClass's AllowedTopologies; if the StorageClass does not set AllowedTopologies, it adds the values of the TopologyKeys keys from all nodes.

How the Scheduler Handles Pods That Use Storage Volumes

Based on the community scheduler, version 1.18.

The scheduling process of the scheduler mainly has the following three steps:

Filter (predicates): filter for the list of nodes that meet the Pod's scheduling requirements.

Score (priorities): score the nodes with the internal scoring algorithms; the node with the highest score is selected.

Bind: the scheduler notifies kube-apiserver of the scheduling result and updates the Pod's .spec.nodeName field.

Scheduler Filter phase: it handles PVC/PV binding for the Pod and dynamic provisioning of PVs (Dynamic Provisioning), and makes the scheduler consider the node affinity of the PVs used by the Pod. The detailed process is as follows:

Pods that use no PVC are skipped directly.

FindPodVolumes

First check whether an existing PV in the environment can match the PVC (findMatchingVolumes), and record matched PVs in the scheduler's cache.

PVCs that match no PV follow the dynamic provisioning path, which mainly uses the StorageClass's AllowedTopologies field to decide whether the candidate node meets the topology requirements (for PVCs of the WaitForFirstConsumer type).

Get the Pod's boundClaims, claimsToBind, and unboundClaimsImmediate:

boundClaims: PVCs that are already bound.

claimsToBind: PVCs whose StorageClass has VolumeBindingMode VolumeBindingWaitForFirstConsumer.

unboundClaimsImmediate: PVCs whose StorageClass has VolumeBindingMode VolumeBindingImmediate.

If len(unboundClaimsImmediate) is not zero, these PVCs must be bound to PVs immediately (a PV is dynamically created and bound right after the PVC is created, independent of scheduling); if such a PVC is still unbound, an error is reported.

If len(boundClaims) is not zero, check whether the node affinity of the PVs bound to these PVCs conflicts with the current node's labels; if so, an error is reported (this checks the PV topology of the Immediate type).

If len(claimsToBind) is not zero, these PVCs go through the PV matching and dynamic provisioning topology checks described above.

The scheduler's Score phase is not discussed here.

Scheduler Assume phase

The scheduler assumes the PV/PVC first, then assumes the Pod.

Make a deep copy of the Pod currently to be scheduled.

AssumePodVolumes (for PVCs of the WaitForFirstConsumer type):

For PVs already matched in the scheduler cache: set the annotation pv.kubernetes.io/bound-by-controller="yes".

For PVCs in the scheduler cache that match no PV: set the annotation volume.kubernetes.io/selected-node=[selected node].

Assume the Pod: change the .spec.nodeName of the Pod in the scheduler cache to [selected node].

Scheduler Bind phase

BindPodVolumes:

Call the Kubernetes API to update the PV/PVC resources in the cluster so that they are consistent with the PV/PVC in the scheduler cache.

Check the PV/PVC status:

Check that all PVCs are in the Bound state.

Check that no PV's NodeAffinity conflicts with the node's labels.

The scheduler performs the Bind operation: it calls the Kubernetes API to update the Pod's .spec.nodeName field.

2. Storage volume expansion

Volume expansion was covered in the External Resizer section, so it is not repeated here. Users only need to edit the PVC's .spec.resources.requests.storage field. Note that capacity can only be increased, not reduced.
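In practice the user edits only the request, e.g. (sizes illustrative):

```yaml
# fragment of the PVC spec the user edits
spec:
  resources:
    requests:
      storage: 40Gi   # was 20Gi; raising it triggers External-Resizer, shrinking is rejected
```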

If the PV expansion fails, the PVC's spec storage field cannot be edited back to the original value (only increases are allowed). See the PVC recovery method on the K8s website: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#recovering-from-failure-when-expanding-volumes

3. Limit on the number of volumes per node

The per-node volume limit was covered in the Node Driver Registrar section, so it is not repeated here.

4. Storage volume monitoring

The storage provider needs to implement the NodeGetVolumeStats interface of the CSI plug-in; Kubelet calls this function and exposes the results in its metrics:

kubelet_volume_stats_capacity_bytes: storage volume capacity

kubelet_volume_stats_used_bytes: used capacity of the storage volume

kubelet_volume_stats_available_bytes: available capacity of the storage volume

kubelet_volume_stats_inodes: total inodes of the storage volume

kubelet_volume_stats_inodes_used: used inodes of the storage volume

kubelet_volume_stats_inodes_free: free inodes of the storage volume

5. Secret

CSI storage volumes support passing Secrets to handle private data needed in the different phases. Currently StorageClass supports the following parameters:

csi.storage.k8s.io/provisioner-secret-name

csi.storage.k8s.io/provisioner-secret-namespace

csi.storage.k8s.io/controller-publish-secret-name

csi.storage.k8s.io/controller-publish-secret-namespace

csi.storage.k8s.io/node-stage-secret-name

csi.storage.k8s.io/node-stage-secret-namespace

csi.storage.k8s.io/node-publish-secret-name

csi.storage.k8s.io/node-publish-secret-namespace

csi.storage.k8s.io/controller-expand-secret-name

csi.storage.k8s.io/controller-expand-secret-namespace

The Secret is passed in the parameters of the corresponding CSI request, e.g. in CreateVolumeRequest.Secrets for the CreateVolume interface.
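For example, a StorageClass passing a node-publish Secret could look like this sketch (the class, secret name, and namespace are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-pangu-secret                     # illustrative class name
provisioner: pangu.csi.alibabacloud.com
parameters:
  # the Secret below is handed to the plug-in in NodePublishVolumeRequest.Secrets
  csi.storage.k8s.io/node-publish-secret-name: pangu-secret       # illustrative
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system   # illustrative
```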

6. Block device

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-example
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  volumeClaimTemplates:
  - metadata:
      name: html
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      storageClassName: csi-pangu
      resources:
        requests:
          storage: 40Gi
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeDevices:
        - devicePath: "/dev/vdb"
          name: html

Third-party storage vendors need to implement the NodePublishVolume interface for block mode. Kubernetes provides a utility package ("k8s.io/kubernetes/pkg/util/mount") for block devices, whose EnsureBlock and MountBlock functions can be called during the NodePublishVolume phase.
