What are the problems in hostPath volume?

2025-04-07 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 05/31 Report

This article introduces the problems with hostPath volume, a dilemma many people run into in practice, and how local persistent volumes address them. I hope you read it carefully and come away with something useful!

Problems in hostPath volume

In the past, we used hostPath volumes to give a Pod local storage by mounting a file or directory from the Node's filesystem into the container. However, hostPath volumes are awkward to manage and not suitable for production use.
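As a minimal sketch (the Pod name, image, and paths below are illustrative, not from the original article), a Pod mounting a host directory via hostPath looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd              # illustrative name
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    - mountPath: /test-pd    # where the host directory appears in the container
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data            # directory on the Node's filesystem
```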

Let's first look at the supported hostPath Type values:

Value: Behavior

- (empty string): Default, for backward compatibility; no checks are performed before mounting the hostPath volume.
- DirectoryOrCreate: If nothing exists at the given path, an empty directory is created there as needed, with permission 0755 and the same group and ownership as Kubelet.
- Directory: A directory must exist at the given path.
- FileOrCreate: If nothing exists at the given path, an empty file is created there as needed, with permission 0644 and the same group and ownership as Kubelet.
- File: A file must exist at the given path.
- Socket: A UNIX socket must exist at the given path.
- CharDevice: A character device must exist at the given path.
- BlockDevice: A block device must exist at the given path.

It seems nice that so many types are supported, so why is hostPath not suitable for production?

Because nodes in a cluster differ from one another, using a hostPath volume means scheduling the Pod precisely onto the right Node, for example via a nodeSelector. Once there are many such volumes, this quickly becomes tiresome.
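For illustration, pinning a Pod to the Node that holds the data might look like this (the hostname label value is an assumed example):

```yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # assumed node name; the Pod can only run here
```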

Note the two creating types of hostPath, DirectoryOrCreate and FileOrCreate: when the corresponding file/directory does not yet exist on the Node, you need to make sure that kubelet has permission to create that file/directory on the Node.
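The type is set on the hostPath entry itself; a sketch using DirectoryOrCreate (volume name and path are illustrative):

```yaml
volumes:
- name: cache-volume
  hostPath:
    path: /var/local/cache     # created by kubelet with 0755 if it does not exist
    type: DirectoryOrCreate
```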

In addition, if the file or directory on the Node is owned by root, then after mounting it into the container you usually have to make sure the container process has permission to write to it. For example, you may need to start the process as root and run it in a privileged container, or adjust the file permissions on the Node in advance.
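As a hedged sketch of the "run as root / privileged" workaround mentioned above (container name and image are illustrative; in practice, fixing ownership on the Node is usually preferable):

```yaml
spec:
  containers:
  - name: writer
    image: busybox
    securityContext:
      runAsUser: 0        # run the container process as root
      privileged: true    # or run a privileged container (use with caution)
```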

The Scheduler does not consider the size of a hostPath volume, and hostPath cannot declare the storage size it requires, so storage capacity during scheduling has to be checked and guaranteed manually.

StatefulSet cannot use hostPath volumes, and Helm Charts already written against shared storage are not compatible with hostPath volumes either; adapting them requires many modifications, which is also painful.

Local persistent volume working mechanism

FEATURE STATE: Kubernetes v1.11 Beta

Local persistent volumes were introduced to address the portability, disk accounting, and scheduling defects of hostPath volumes. The PV Controller and the Scheduler apply special logic to local PVs, so that when a Pod using local storage is re-scheduled, it is scheduled back onto the Node where its local volume resides.

Local PVs also need to be used cautiously in production. They are, after all, backed by local storage on the node: without a storage replication mechanism, a node or disk failure means the Pod using the volume fails too, possibly with data loss. Use them only if you know this risk has little impact on your application, or data loss is acceptable.

So in what situations is a local PV typically used?

One example: the directory data on the node is mounted from remote network storage, or pre-loaded locally, in order to speed up Pod reads of that data; it effectively acts as a cache. Because the data is read-only in this case, there is no fear of losing it. This approach is sometimes taken in AI training, where the training data is huge and needs to be reused many times.

You can also use a local PV if the directory/disk on the local node is actually backed by distributed storage with a replication/sharding mechanism (such as GlusterFS or Ceph).

Local volumes allow a local disk, partition, or directory to be mounted at a mount point in the container. As of Kubernetes 1.11, only static provisioning of local PVs is supported, not dynamic provisioning.

Kubernetes uses the PersistentVolume .spec.nodeAffinity field to describe the binding relationship between a local volume and a Node.

Use a local-storage StorageClass with volumeBindingMode: WaitForFirstConsumer to delay binding of the PVC, so that the PV Controller does not bind the PVC immediately, but waits until a Pod that needs the local PV has completed scheduling before binding.

Here is a sample definition of a local PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node

The corresponding local-storage storageClass is defined as follows:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Considerations for using local persistent volume

When using a local PV, you must define nodeAffinity; the Kubernetes Scheduler uses the PV's nodeAffinity description to ensure the Pod is scheduled onto the Node that has the corresponding local volume.

volumeMode can be Filesystem (default) or Block; Block requires the BlockVolume Alpha feature gate to be enabled.

Before creating a local PV, make sure the corresponding StorageClass has been created, and that the StorageClass's volumeBindingMode is WaitForFirstConsumer to indicate delayed volume binding. WaitForFirstConsumer not only guarantees that normal Pod scheduling requirements are met (resource requests, node selectors, Pod affinity and anti-affinity, etc.), but also ensures that the nodeAffinity of the local PV required by the Pod can be satisfied. There are in fact two volumeBindingMode values:

// VolumeBindingImmediate indicates that PersistentVolumeClaims should be
// immediately provisioned and bound.
VolumeBindingImmediate VolumeBindingMode = "Immediate"

// VolumeBindingWaitForFirstConsumer indicates that PersistentVolumeClaims
// should not be provisioned and bound until the first Pod is created that
// references the PersistentVolumeClaim. The volume provisioning and
// binding will occur during Pod scheduling.
VolumeBindingWaitForFirstConsumer VolumeBindingMode = "WaitForFirstConsumer"

The initialization of the local volume on the node has to be done manually (for example, a local disk needs to be pre-partitioned, formatted, and mounted; the directories corresponding to shared storage also need to be pre-created), and the local PV is created manually. When the Pod ends, we also have to clean up the local volume manually and then delete the local PV object manually. Therefore, persistentVolumeReclaimPolicy can only be Retain.
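To consume a local PV like the example-pv defined earlier, a PVC against the local-storage class plus a Pod referencing it could look like this (the claim, Pod, and container names are illustrative); because of WaitForFirstConsumer, the PVC stays Pending until the Pod is scheduled:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim    # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data         # local volume mounted here in the container
      name: local-vol
  volumes:
  - name: local-vol
    persistentVolumeClaim:
      claimName: example-local-claim
```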

Local volume manager

With so much of the above done by hand, we need a solution that automates the creation and cleanup of local volumes. The Kubernetes community provides a simple local volume manager; note that it is still only a static provisioner. Currently it mainly does two things for us:

The local volume manager monitors the configured discovery directory for new mount points, and creates a PersistentVolume object for each mount point based on the corresponding storageClassName, path, nodeAffinity, and capacity.

When a Pod using a local volume ends and its PVC is deleted, the local volume manager automatically cleans up all files on that local mount point and then deletes the corresponding PersistentVolume object.

Therefore, apart from manually performing the mount operation for the local volume, the lifecycle management of local PVs is left to the local volume manager. We will introduce this Static Local Volume Provisioner in detail separately.
